SECTION 1. SHORT TITLE.
This Act may be cited as the ``Non-Discrimination of Israel in
Labeling Act''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) Prior to the issuance of Trade Directive (T.D.) 95-25
and T.D. 97-16, the Customs Service had taken the position
that, in order for the country of origin marking of a good
which was produced in the West Bank or Gaza Strip to be
considered acceptable, the word ``Israel'' must appear in the
marking designation.
(2) The Department of State advised the Department of
Treasury that, in view of certain developments, principally the
Israeli-PLO Declaration of Principles on Interim Self-
Government Arrangements (signed on September 13, 1993), also
known as the Oslo Accords, the primary purpose of section 304
of the Tariff Act of 1930 (19 U.S.C. 1304) would be best served
if goods produced in the West Bank and Gaza Strip under the
Palestinian interim self-government were permitted to be marked
``West Bank'' or ``Gaza Strip''.
(3) The Oslo Accords created a new self-rule entity, an
interim self-governing Palestinian council, granting it the
authority to independently conduct its affairs, including
financial matters such as import and export.
(4) On March 17, 1995, President Clinton signed
Presidential Proclamation 6788 designating the West Bank and
Gaza Strip as a beneficiary of the generalized system of
preferences program.
    (5) The United States Customs and Border Protection Cargo
Systems Messaging Service guidance dated March 28, 1995,
stated: ``The extension of the generalized system of
preferences program to the West Bank and Gaza Strip pursuant to
this Presidential Proclamation applies only to goods produced
in the areas for which arrangements are being established for
Palestinian interim self-government, as set forth in Articles
I, III, and IV of the Declaration of Principles on Interim
Self-Government arrangements.''.
(6) The March 28, 1995, guidance further articulated
Articles IV and V of the Declaration of Principles on Interim
Self-Government arrangements, stating: ``It is understood that:
Jurisdiction of the Council will cover West Bank and Gaza Strip
territory, except for issues that will be negotiated in the
permanent status negotiations: Jerusalem, settlements, military
location, and Israelis.''.
(7) It is the longstanding policy of the United States to
oppose any effort to delegitimize Israel.
(8) The first free trade agreement by the United States was
between the United States and Israel, effective September 1,
1985.
(9) The United States-Israel Strategic Partnership is a
vital asset to United States national, economic, and security
interests and any boycott, or sanctions effort, or policy that
serves to delegitimize or discriminate against Israel will
ultimately harm United States economic interests.
SEC. 3. ADDITIONAL MARKINGS OF IMPORTED ARTICLES AND CONTAINERS FROM
THE WEST BANK AND GAZA STRIP.
(a) Articles of West Bank.--For purposes of section 304 of the
Tariff Act of 1930 (19 U.S.C. 1304), every article of origin of the
geographical area known as the West Bank (or the container of any such
article) imported into the United States shall be marked in accordance
with the requirements of such section, which--
(1) in the case of an article of an area not administered
by Israel in the West Bank, shall include the words ``West
Bank''; and
(2) in the case of an article of an area administered by
Israel in the West Bank, shall include the words ``Israel'',
``Made in Israel'', or ``Product of Israel''.
(b) Articles of Gaza Strip.--For purposes of section 304 of the
Tariff Act of 1930 (19 U.S.C. 1304), every article of origin of the
geographical area known as the Gaza Strip (or the container of any such
article) imported into the United States shall be marked in accordance
with the requirements of such section, and shall include the words
``Gaza'' or ``Gaza Strip''.
(c) Additional Requirement.--The Secretary of the Treasury or any
other competent Federal official (or the official's designee) may not
prohibit the use of any of the markings specified in subsections (a)
and (b) for purposes of satisfying the applicable requirements under
section 304 of the Tariff Act of 1930 with respect to articles of the
West Bank or the Gaza Strip.
(d) Effective Date.--This section shall take effect on the date of
the enactment of this Act and shall apply with respect to articles
imported into the United States on or after such date of enactment.

Summary: Non-Discrimination of Israel in Labeling Act. This bill requires that, for purposes of marking imported articles and containers under the Tariff Act of 1930, every article of origin from the geographical area known as the West Bank (or its container) imported into the United States shall include the words: "West Bank" in the case of an article of an area in the West Bank not administered by Israel; and "Israel," "Made in Israel," or "Product of Israel" in the case of an article of an area in the West Bank administered by Israel. Every article of origin from the geographical area known as the Gaza Strip (or its container) imported into the United States shall include the words "Gaza" or "Gaza Strip." Neither the Department of the Treasury nor any other federal department or agency may prohibit the use of any of such markings for purposes of satisfying the Tariff Act of 1930.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``North American Energy Infrastructure
Act''.
SEC. 2. FINDING.
Congress finds that the United States should establish a more
uniform, transparent, and modern process for the construction,
connection, operation, and maintenance of oil and natural gas pipelines
and electric transmission facilities for the import and export of oil
and natural gas and the transmission of electricity to and from Canada
and Mexico, in pursuit of a more secure and efficient North American
energy market.
SEC. 3. AUTHORIZATION OF CERTAIN ENERGY INFRASTRUCTURE PROJECTS AT THE
NATIONAL BOUNDARY OF THE UNITED STATES.
(a) Authorization.--Except as provided in subsection (c) and
section 7, no person may construct, connect, operate, or maintain a
cross-border segment of an oil pipeline or electric transmission
facility for the import or export of oil or the transmission of
electricity to or from Canada or Mexico without obtaining a certificate
of crossing for the construction, connection, operation, or maintenance
of the cross-border segment under this section.
(b) Certificate of Crossing.--
(1) Requirement.--Not later than 120 days after final
action is taken under the National Environmental Policy Act of
1969 (42 U.S.C. 4321 et seq.) with respect to a cross-border
segment for which a request is received under this section, the
relevant official identified under paragraph (2), in
consultation with appropriate Federal agencies, shall issue a
certificate of crossing for the cross-border segment unless the
relevant official finds that the construction, connection,
operation, or maintenance of the cross-border segment is not in
the public interest of the United States.
(2) Relevant official.--The relevant official referred to
in paragraph (1) is--
(A) the Secretary of State with respect to oil
pipelines; and
(B) the Secretary of Energy with respect to
electric transmission facilities.
(3) Additional requirement for electric transmission
facilities.--In the case of a request for a certificate of
crossing for the construction, connection, operation, or
maintenance of a cross-border segment of an electric
transmission facility, the Secretary of Energy shall require,
as a condition of issuing the certificate of crossing for the
request under paragraph (1), that the cross-border segment of
the electric transmission facility be constructed, connected,
operated, or maintained consistent with all applicable policies
and standards of--
(A) the Electric Reliability Organization and the
applicable regional entity; and
(B) any Regional Transmission Organization or
Independent System Operator with operational or
functional control over the cross-border segment of the
electric transmission facility.
(c) Exclusions.--This section shall not apply to any construction,
connection, operation, or maintenance of a cross-border segment of an
oil pipeline or electric transmission facility for the import or export
of oil or the transmission of electricity to or from Canada or Mexico--
(1) if the cross-border segment is operating for such
import, export, or transmission as of the date of enactment of
this Act;
(2) if a permit described in section 6 for such
construction, connection, operation, or maintenance has been
issued;
(3) if a certificate of crossing for such construction,
connection, operation, or maintenance has previously been
issued under this section; or
(4) if an application for a permit described in section 6
for such construction, connection, operation, or maintenance is
pending on the date of enactment of this Act, until the earlier
of--
(A) the date on which such application is denied;
or
(B) July 1, 2016.
(d) Effect of Other Laws.--
(1) Application to projects.--Nothing in this section or
section 7 shall affect the application of any other Federal
statute to a project for which a certificate of crossing for
the construction, connection, operation, or maintenance of a
cross-border segment is sought under this section.
(2) Energy policy and conservation act.--Nothing in this
section or section 7 shall affect the authority of the
President under section 103(a) of the Energy Policy and
Conservation Act.
SEC. 4. IMPORTATION OR EXPORTATION OF NATURAL GAS TO CANADA AND MEXICO.
Section 3(c) of the Natural Gas Act (15 U.S.C. 717b(c)) is
amended--
    (1) by striking ``For purposes of subsection (a) of this
section'' and inserting the following:
``(1) In general.--For purposes of subsection (a)''; and
(2) by adding at the end the following:
``(2) Deadline for approval of applications relating to
canada and mexico.--In the case of an application for the
importation or exportation of natural gas to or from Canada or
Mexico, the Commission shall approve the application not later
than 30 days after the date of receipt of the application.''.
SEC. 5. TRANSMISSION OF ELECTRIC ENERGY TO CANADA AND MEXICO.
(a) Repeal of Requirement To Secure Order.--Section 202(e) of the
Federal Power Act (16 U.S.C. 824a(e)) is repealed.
(b) Conforming Amendments.--
(1) State regulations.--Section 202(f) of the Federal Power
Act (16 U.S.C. 824a(f)) is amended by striking ``insofar as
such State regulation does not conflict with the exercise of
the Commission's powers under or relating to subsection
202(e)''.
(2) Seasonal diversity electricity exchange.--Section
602(b) of the Public Utility Regulatory Policies Act of 1978
(16 U.S.C. 824a-4(b)) is amended by striking ``the Commission
has conducted hearings and made the findings required under
section 202(e) of the Federal Power Act'' and all that follows
through the period at the end and inserting ``the Secretary has
conducted hearings and finds that the proposed transmission
facilities would not impair the sufficiency of electric supply
within the United States or would not impede or tend to impede
the coordination in the public interest of facilities subject
to the jurisdiction of the Secretary.''.
SEC. 6. NO PRESIDENTIAL PERMIT REQUIRED.
No Presidential permit (or similar permit) required under Executive
Order No. 13337 (3 U.S.C. 301 note), Executive Order No. 11423 (3
U.S.C. 301 note), section 301 of title 3, United States Code, Executive
Order No. 12038, Executive Order No. 10485, or any other Executive
order shall be necessary for the construction, connection, operation,
or maintenance of an oil or natural gas pipeline or electric
transmission facility, or any cross-border segment thereof.
SEC. 7. MODIFICATIONS TO EXISTING PROJECTS.
No certificate of crossing under section 3, or permit described in
section 6, shall be required for a modification to the construction,
connection, operation, or maintenance of an oil or natural gas pipeline
or electric transmission facility--
(1) that is operating for the import or export of oil or
natural gas or the transmission of electricity to or from
    Canada or Mexico as of the date of enactment of this Act;
(2) for which a permit described in section 6 for such
construction, connection, operation, or maintenance has been
issued; or
(3) for which a certificate of crossing for the cross-
border segment of the pipeline or facility has previously been
issued under section 3.
SEC. 8. EFFECTIVE DATE; RULEMAKING DEADLINES.
(a) Effective Date.--Sections 3 through 7, and the amendments made
by such sections, shall take effect on July 1, 2015.
(b) Rulemaking Deadlines.--Each relevant official described in
section 3(b)(2) shall--
(1) not later than 180 days after the date of enactment of
this Act, publish in the Federal Register notice of a proposed
rulemaking to carry out the applicable requirements of section
3; and
(2) not later than 1 year after the date of enactment of
this Act, publish in the Federal Register a final rule to carry
out the applicable requirements of section 3.
SEC. 9. DEFINITIONS.
In this Act--
(1) the term ``cross-border segment'' means the portion of
an oil or natural gas pipeline or electric transmission
facility that is located at the national boundary of the United
States with either Canada or Mexico;
(2) the term ``modification'' includes a change in
ownership, volume expansion, downstream or upstream
interconnection, or adjustment to maintain flow (such as a
reduction or increase in the number of pump or compressor
stations);
(3) the term ``natural gas'' has the meaning given that
term in section 2 of the Natural Gas Act (15 U.S.C. 717a);
(4) the term ``oil'' means petroleum or a petroleum
product;
(5) the terms ``Electric Reliability Organization'' and
``regional entity'' have the meanings given those terms in
section 215 of the Federal Power Act (16 U.S.C. 824o); and
(6) the terms ``Independent System Operator'' and
``Regional Transmission Organization'' have the meanings given
those terms in section 3 of the Federal Power Act (16 U.S.C.
796).

Summary: North American Energy Infrastructure Act - Prohibits any person from constructing, connecting, operating, or maintaining a cross-border segment of an oil or natural gas pipeline or electric transmission facility at the national boundary of the United States for the import or export of oil, natural gas, or electricity to or from Canada or Mexico without obtaining a certificate of crossing under this Act. Requires the Secretary of State, with respect to oil pipelines, or the Secretary of Energy (DOE), with respect to electric transmission facilities, to issue a certificate of crossing for the cross-border segment within 120 days after final action is taken under the National Environmental Policy Act of 1969, unless it is not in U.S. public interest. Directs DOE, as a condition of issuing a certificate, to require that the cross-border segment be constructed, connected, operated, or maintained consistent with specified policies and standards. Amends the Natural Gas Act to require the Federal Energy Regulatory Commission (FERC) to approve within 30 days after receipt any application for the importation or exportation of natural gas to or from Canada or Mexico. Declares that no presidential permit shall be necessary for the construction, connection, operation, or maintenance of an oil or natural gas pipeline or electric transmission facility, including any cross-border segment.
the accretion of matter onto a forming star is inextricably associated with mass outflow .
this manifests itself as well - collimated jets typically observed in higher excitation shock - excited forbidden lines such as [ sii ] or [ feii ] , or lower excitation emission of shock - excited h@xmath1 .
larger - scale and older outflows are typically detected in co emission of entrained ambient material .
while the precise mechanism of jet launching is actively being debated , steady progress has been made in observational studies of the launch regions of jets .
such studies require the highest possible spatial resolutions , since all theories postulate that jets are launched from the inner regions of the protostellar disks on spatial scales of a few au , or from the protostar 's magnetosphere , on scales of a stellar diameter . at present , the highest spatial resolutions for studies of the jet launch regions are achieved at optical wavelengths with the hubble space telescope ( hst ) , and at near - infrared wavelengths with adaptive optics on large ground - based telescopes .
the near - infrared techniques have the advantage of being able to better penetrate dust extinction , so that the jet launch regions of more deeply embedded , generally younger stars can be observed . however , near - infrared laser - guide - star adaptive optics observations today are still limited by the requirement to have a fairly bright , optically visible ( r @xmath5 16 ) tip - tilt reference star close to the object , or that the object itself can serve this purpose . as a consequence , even at near - infrared wavelengths , the best candidates for high spatial resolution studies are objects near the end of their class i phase or early in the classical t tauri star phase , because in nearby molecular ( dark ) clouds , the only available optical tip - tilt reference star is often the young star itself . in this paper we present detailed adaptive - optics corrected integral field spectroscopy of ngc 1333 svs 13 , the driving source of the famous chain of herbig - haro objects hh 7 - 11 @xcite .
other commonly used names for svs 13 are v512 per and 2mass j03290375 + 3116039 .
the svs 13 outflow appears relatively poorly collimated and is comprised of a number of individual shock fronts , as can be seen in fig . 1 that presents a progression from seeing - limited to diffraction - limited images of the svs 13 outflow .
there is an anti - parallel counter outflow visible , but it is displaced from the axis of the hh 7 - 11 outflow . for a general overview of ngc 1333 and a review of the literature on the svs 13 subcluster , the reader is referred to the review article by @xcite .
the two micron all sky survey ( 2mass ) position of svs 13 is @xmath6 at epoch 1999 nov . 26 , with an accuracy of @xmath7 @xcite . this position lies between the vla configuration a radio position of vla 4b at 3.6 cm reported by @xcite at @xmath8 , with an effective epoch of 1997.9 and estimated errors of @xmath9 , and the 7 mm vla configuration b position of the same radio source , which @xcite give as @xmath10 with an epoch of 2001 may 4 . the epoch of the 2mass observations is between the epochs of the vla observations , and the reported near - infrared coordinates lie between those vla coordinates . within the errors of @xmath9 for the vla data and of @xmath7 for 2mass , and a tentatively indicated small proper motion of vla 4b , these data show that the near - infrared source svs 13 is identical to the vla mm and cm - wavelength source vla 4b . it should be noted that @xcite and @xcite , on the basis of older optical astrometry , had originally identified svs 13 with a different cm - radio source : vla 4a .
[fig. 1 caption (fragment): ( 4.5 @xmath4 m ) image based on the same original data as those used by @xcite for their proper motion study . the image shows the full extent of the shocked molecular hydrogen emission . the second panel from the top is a small portion of a h@xmath1 s(1 ) 2.122 @xmath4 m image obtained in 1996 with the quirc camera at the uh 2.2 m telescope that shows the emission features closest to svs 13 . the third panel shows the integrated h@xmath1 s(1 ) line intensity observed with osiris on keck ii in 2011 with the 100 mas spaxel scale , while the fourth panel was taken with osiris on keck i in 2012 with the 20 mas spaxel scale .]
we use the distance to ngc 1333 svs 13 determined by @xcite from vlbi parallax measurements of masers associated with this object : 235 @xmath11 18 pc .
the study of the dynamics of the ngc 1333 region by @xcite has determined the systemic velocity of the gas near svs 13 to be 8 km @xmath12 relative to the local standard of rest . throughout this paper , we will give velocities relative to this systemic velocity of the ngc 1333 molecular core that svs 13 is embedded in .
young , still accreting stars often experience eruptive changes in brightness that traditionally get classified into either fu orionis ( fuor ) or ex lupi ( exor ) type outbursts , depending on the duration of the outburst and its spectrum .
the first review of these phenomena has been given by @xcite .
fuor outbursts exhibit time scales of decades to centuries and the spectral characteristics of an optically thick luminous disk with absorption line spectra , while the less substantive exor outbursts have timescales of years , have been observed to actually return to pre - outburst brightness , and show optically thin emission line spectra .
jets emanating from young stars are often comprised of a series of individual shock fronts that have been postulated to arise from changes in the jet velocity as a result of repetitive eruptive events , as described in the review by @xcite and references therein .
this gives , in principle , a way to study the history of such eruptive accretion instabilities from the `` fossil '' record in the jet shock fronts .
our target object , svs 13 , is certainly in this class of multi - shock outflow sources and has been observed to undergo an outburst around 1990 , but the light curve does not match either the classical fuor or exor curves .
the relationship of this last observed outburst to the structure of the svs 13 outflow will be studied here .
the proper motions of the shock fronts of the hh 7 - 11 system associated with svs 13 have been studied at optical wavelengths by @xcite and later by @xcite , and in the infrared in the h@xmath1 10 s(1 ) line by @xcite using the near - infrared camera and multi - object spectrometer ( nicmos ) , by @xcite using the united kingdom infrared telescope ( ukirt ) , and with the spitzer space telescope by @xcite . these studies all gave proper motions of the shock fronts in the range of 33 mas yr@xmath13 , which establish a kinematic expansion age of 2100 yrs for the most distant shock front studied there , the hh 7 bow shock .
in general , the jets seen in optical and near - infrared forbidden lines , mostly in [ sii ] and [ feii ] , offer the most direct view of the material ejected from an accretion disk or from a protostar s magnetosphere .
there are differences in the details of theoretical models of this process , e.g. , in @xcite , @xcite , @xcite , or @xcite , but they all agree on the main point : jets originating from the magnetosphere of a rotating star , or from a rotating disk , and detected in high - excitation shock - excited lines are expected to carry away excess angular momentum and thereby enable the mass accretion process .
however , to date , the most convincing detections of a rotational signature of jets from very young and still deeply embedded objects in sed classes 0 and i have been obtained with radio interferometric observations of outflows in co and sio emission .
these observations typically achieve spatial resolutions of a few arcseconds but make up for this disadvantage by exquisite velocity resolution .
carbon - monoxide high - velocity emission is mostly associated with the entrainment of ambient molecular material into a jet originally emitted at even higher velocity .
any signature of the original jet rotation is expected to be highly confused at this turbulent interface between the jet and the ambient material .
nevertheless , @xcite detected the kinematic signature of rotation in the jet of cb 26 in the co(2 - 1 ) line , very close to the source of the jet .
@xcite found some evidence for rotational components in the ori - s6 outflow using various higher transitions of co and so , while @xcite recently reported a rotational signature in the hh 797 outflow in ic 348 using co(2 - 1 ) . in these cases , the strongest rotational signatures were found at some distance ( @xmath14 ) from the driving source .
even farther away from the driving source , using sio data from the vla , @xcite found a rotation signature of the ngc 1333 iras 4a2 protostellar jet , consistent with the disk rotation in that object .
the jet could only be observed at distances of more than 5@xmath15 from its source , and the rotation signature appears most pronounced at distances of 20@xmath15 .
these results are far from universal , however . in a series of papers on hh 211 , culminating in @xcite , only a tentative detection of rotation was obtained .
@xcite studied the kinematics of sio emission in the hh 212 jet and did not detect a credible rotational signature .
in this paper we have tried to detect the kinematic signature of jet rotation in our [ feii ] data ; we present our velocity data , but in the end we failed to detect this effect .
there are two strong systems of near - infrared emission lines that are commonly used for the study of protostellar jets : the ro - vibrational lines of h@xmath1 and the forbidden [ feii ] lines . as was already discussed specifically in the case of svs 13 by @xcite , the h@xmath1 lines , the brightest being the 10 s(1 ) line at 2.122 @xmath4 m , trace low - velocity shocks , either internal shocks within the jet , or the turbulent interface to the ambient medium around the jet .
this line is therefore well suited for the study of internal shocks and entrainment of ambient material by the jet , but is less suited for a detection of the jet itself .
in contrast , [ feii ] traces higher temperatures and excitations and is usually confined to the densest parts of a jet near its launch region , and to its terminal shocks against ambient material .
the [ feii ] lines in the near - infrared are therefore particularly suited for a study of the kinematics of the jet itself . emission from atomic hydrogen , in the near infrared specifically the brackett series of lines , is observed in many young stars , and typically originates in the spatially unresolved accretion disk around the young star itself , and not in the jet .
the adaptive optics data reported here were obtained at the keck i and ii telescopes , using laser guide star adaptive optics in conjunction with the oh - suppression infrared imaging spectrograph ( osiris ) built by @xcite .
osiris is a lenslet integral field spectrograph where each spatial element ( spaxel ) forms an image of the telescope pupil before being dispersed into a spectrum .
this technique separates spatial flux gradients in each spaxel from any velocity shifts and is therefore very suitable for measuring small shifts in line centroid over extended objects with spatial scales of the order of a spaxel .
the spectral lines mentioned above can be observed in two settings of osiris . in the 3@xmath16 grating order and through the kn2 filter , the h@xmath1 s(1 ) line at 2.122 @xmath4 m is covered . in the 4@xmath17 grating order and through the hn3 filter , two [ feii ] lines and two brackett - series lines are included in the spectral bandpass .
the first observations were carried out in the night of 2011 august 21 ( ut ) ( mjd 55794.5345 = 2011.638 ) with osiris on the keck ii telescope , using the relatively coarse 100 mas per spaxel and 50 mas per spaxel scales and the kn2 and kbb filters . a second , higher quality set of data was obtained in the night of 2012 november 4 ( ut ) ( mjd 56235.4853 = 2012.844 ) with osiris now at the keck i telescope . the finest of the spaxel scales of osiris , with lenslets subtending 20 mas , was used under conditions of exceptionally good seeing , with the canada - france - hawaii telescope ( cfht ) seeing monitor reporting average seeing at optical wavelengths of @xmath2 0@xmath03 .
a third set of data was obtained on 2013 november 22 and 23 ( mjd 56618.5 and 56619.5 , midpoint=2013.893 ) with osiris at the keck i telescope .
the seeing was not quite as good as in the year before , but still , very good data were obtained that are well suited for an astrometric comparison with the 2012 data .
[table: log of the keck osiris observations]
on 2012 nov . 04 and 2013 nov . 22 , we obtained data sets with 600 s individual exposure time in the hn3 filter , covering the 1.644002 @xmath4 m a@xmath18 a@xmath19 emission of [ feii ] ( referred to in the following as the 1.644 @xmath4 m line ) and also the fainter [ feii ] @xmath20 @xmath21 line at 1.599915 @xmath4 m ( in the following called the 1.600 @xmath4 m line ) .
all wavelengths given in this paper refer to vacuum and the line wavelengths and identifications are based on @xcite .
fig . 2 shows a raw spectrum of svs 13 in the hn3 filter , averaged over a large 0@xmath02@xmath220@xmath02 box centered on svs 13 , to include the radiation from the object , but also the oh airglow sky flux used for wavelength calibration . the [ feii ] 1.600 @xmath4 m line was evaluated and generally corroborates the conclusions drawn from the 1.644 @xmath4 m line , but due to its lower signal - to - noise ratio , these data are not presented here , nor did we try to extract information on excitation conditions from a comparison of those two [ feii ] lines . @xcite have already presented a detailed study of the excitation conditions of h@xmath1 and [ feii ] emission in svs 13 .
in addition to these shock - excited forbidden lines , osiris in 4@xmath17 grating order with the hn3 filter also covers two atomic hydrogen emission lines , the 13 - 4 ( 1.611373 @xmath4 m ) and 12 - 4 ( 1.641170 @xmath4 m ) recombination lines of the brackett series @xcite . in the kn2 filter , 4 data sets with 600 s exposure time each were obtained that cover the h@xmath1 10 s(1 ) line at 2.121833 @xmath4 m @xcite .
[fig. 2 caption (fragment): ... 2@xmath220@xmath02 box containing svs 13 , but also substantial sky oh airglow flux . this figure serves to illustrate the hydrogen br 12 and 13 lines , the [ feii ] lines , and the night - sky oh airglow lines used for wavelength calibration .]
for the monitoring of svs 13 at infrared wavelengths , the brightness of the object around k=8 is actually a problem .
most archival infrared images obtained with large telescopes and modern infrared cameras are saturated on svs 13 . from 2012 to 2014 , we have therefore monitored the ngc 1333 region , including svs 13 , with the infrared imaging survey ( iris ) system specifically to study the variability of that object at the present epoch .
iris is a 0.8 m telescope and 1024@xmath221024 infrared camera dedicated to the monitoring of infrared variability .
it has been described in detail by @xcite .
the iris infrared camera , which is a refurbished version of the uh quick infrared camera ( quirc ) described by @xcite , is operated in a mode where , after the first non - destructive read of the detector array , we immediately take a second read and compute a double - correlated image with only 2 s exposure time by differencing these first two reads .
after this second read - out , the integration continues up to a third read of the detector array , which gives the full integration time of 20 s usually used for our monitoring projects .
the first ( 2 s effective integration time ) images are far from saturating on svs 13 and were used for the photometry presented here .
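the double - correlated sampling scheme is simple enough to sketch . the following is a minimal illustration with simulated reads ; the array size matches the camera , but the count rates and the random - number stand - ins are assumptions , not iris pipeline code :

```python
import numpy as np

# illustrative stand-in for three non-destructive reads of the detector
# (simulated poisson counts; not real iris data).
rng = np.random.default_rng(0)
read0 = rng.poisson(100, (1024, 1024)).astype(float)  # first read,  t = 0 s
read1 = read0 + rng.poisson(20, (1024, 1024))         # second read, t = 2 s
read2 = read1 + rng.poisson(180, (1024, 1024))        # third read,  t = 20 s

short_exp = read1 - read0  # 2 s double-correlated image, unsaturated on svs 13
long_exp  = read2 - read0  # full 20 s integration used for the faint sources
```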
the iris camera raw data are processed in a reduction pipeline based on the image reduction and analysis facility ( iraf ) software @xcite .
the astrometric solution of the co - added images is calculated based on sextractor @xcite and scamp @xcite .
photometry is obtained with the iraf task phot and calibrated against the 2mass catalog @xcite .
we chose to correct dark current and other signal - independent detector artifacts in osiris by subtracting the median of 10 dark frames of 600 s exposure time from each object exposure , in order not to waste observing time on frequent observations of an empty sky field for sky background subtraction .
the 2d detector frames were then processed into 3d spectral data cubes with the osiris final data reduction pipeline ( frp ) .
this process of extracting the spectra from the raw detector format is obviously critical for the wavelength calibration of the spectra and therefore for the extraction of velocity information . in 2013 , the keck observatory released a new version of the data reduction pipeline ( drp v3.2 ) with newly calibrated `` rectification matrices '' that contain the parameters for the extraction of individual spaxel spectra from the raw detector format into wavelength calibrated spectra . a first inspection of the newly reduced 2013 data showed better consistency of the wavelength calibration across all spaxels , and therefore the 2012 data were also re - reduced with the new software and calibration data . these newly reduced data did indeed change the velocity structure observed in the svs 13 jet , to the point of forcing a different conclusion .
the individual data cubes in the hn3 filter were cleaned of spurious bright spaxels using the iraf cosmicray task on individual planes of the data cube .
all data cubes in the hn3 filter were spatially mosaicked together with offsets determined by the centroid of the bright continuum image of the svs 13 star . for the shorter wavelengths , this method resulted in superior image quality compared to using offset data supplied by the ao system . for the longer wavelength spectra in the kn2 band , where we only took one dither pattern , relying on the offsets provided by the laser guide star system proved sufficient .
for both the hn3 and kn2 data cubes , the individual planes ( wavelengths ) of the resulting combined spectral data cubes were adjusted for zero sky background using the background zero adjustment in the iraf task imcombine .
the primary purpose of this procedure was to remove the oh(5,3)r@xmath23(2 ) night sky line at 1.644216 @xmath4 m that partly overlaps with the [ feii ] 1.644002 @xmath4 m line , as was discussed by @xcite .
the integral field spectrograph osiris records all wavelength planes simultaneously , so that the psf at continuum wavelengths is recorded under identical atmospheric conditions as the structure at emission line wavelengths . over small wavelength intervals where the wavelength - dependence of diffraction is negligible , very precise continuum subtraction can therefore be achieved . the continuum adjacent to a given line was computed as the average of approximately 10 wavelength planes outside of the line profile and centered on the line .
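a minimal sketch of this per - spaxel continuum estimate , assuming a hypothetical cube layout with wavelength on axis 0 ; the function name , `line_plane` index , and plane counts are illustrative assumptions , not the authors' code :

```python
import numpy as np

def subtract_continuum(cube, line_plane, half_width=3, n_cont=5):
    """Average ~2*n_cont planes just outside the line profile and subtract.

    `cube` has shape (n_wavelength, ny, nx); the line sits at index
    `line_plane` and is assumed to span +/- half_width planes.
    """
    lo = slice(line_plane - half_width - n_cont, line_plane - half_width)
    hi = slice(line_plane + half_width + 1, line_plane + half_width + 1 + n_cont)
    continuum = np.concatenate([cube[lo], cube[hi]]).mean(axis=0)
    return cube - continuum  # continuum-free line cube
```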
between the 2011 and 2012 runs , the osiris instrument was moved from the keck ii telescope to keck i. both telescopes and their adaptive optics systems have nominally identical focal lengths , but for an astrometric comparison of data from these two telescopes , the spaxel scale in osiris should be observationally verified .
the best data available for this purpose were kindly provided by t. do and j. lu from their work on astrometry of stars near the galactic center where they calibrated osiris astrometry relative to astrometry with nirc2 , an instrument that stayed at the keck ii telescope .
these data indicate that at keck i , the spaxel scale in mas pixel@xmath13 of osiris is 1.28% larger than it had been at keck ii .
while installed on keck i , between our 2012 and 2013 observing runs , the grating of osiris was also changed , resulting in a new wavelength calibration .
also , as was discussed above , the data reduction pipeline used at keck for osiris data was upgraded .
finally , in moving from keck ii to keck i , the optical path leading to osiris now contains one less mirror reflection , leading to a change in parity of the images .
we used the osiris camera with its wider field to obtain a few images tying the osiris spectral data cubes astrometrically to conventional images of svs 13 and one detectable star in its vicinity .
in addition , we observed other objects to ascertain that our data are presented in the correct parity and orientation on the sky . the precise scale ratio of the nominal 50 mas ( used in 2011 ) and 20 mas ( used in 2012 and 2013 ) spaxel scales of osiris was measured on setup data cubes obtained in 2008 while observing the @xmath2 0@xmath025 separation binary @xmath24 orionis @xcite . by averaging the component separation measured in each continuum data cube plane within @xmath11 0.1 @xmath4 m of 2.122 @xmath4 m , a ratio of 2.4534 @xmath11 0.0005 between the two spaxel scales was measured .
this value deviates by about 1 % from the nominal scale ratio of 2.5 .
it should be noted that these deviations from the nominal spaxel scale ratios and the , even smaller , errors of this measurement are much smaller than the astrometric effects of shock - front motion and expansion discussed later in this paper .
the continuum images near the emission line features show nothing but the unresolved stellar object svs 13 .
they are thus suitable as a psf kernel for deconvolution of the individual ( wavelength ) cube planes .
the deconvolution was done with the lucy - richardson algorithm @xcite and @xcite , as implemented in the iraf stsdas package .
how fast the lucy - richardson algorithm converges depends on the signal - to - noise ratio in the frame .
we have tried out different numbers of iterations and chi - square criteria and have chosen a combination that leads to a significant improvement in the spatial resolution without creating obvious artifacts .
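a minimal sketch of one such deconvolution step , using scikit - image's richardson - lucy routine in place of the iraf stsdas task the authors actually used ; `plane` and `psf` are assumed inputs ( one wavelength plane and the stellar continuum image ) , and the iteration count is the knob the text describes tuning :

```python
from skimage import restoration

def deconvolve_plane(plane, psf, iterations=20):
    # richardson-lucy expects non-negative data and a normalized psf kernel;
    # `num_iter` is the parameter name in recent scikit-image releases.
    psf = psf / psf.sum()
    shifted = plane - plane.min()
    return restoration.richardson_lucy(shifted, psf, num_iter=iterations, clip=False)
```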
working with the original data without continuum subtraction led to strong artifacts in the wings of the overwhelmingly bright stellar source .
for the deconvolution of faint emission features well separated from svs 13 we have therefore worked from the continuum - subtracted data . for the purpose of the grey - scale and rgb representations of the images , we have computed a `` high - dynamic - range '' version of the image by adding the deconvolved continuum - subtracted image and a small fraction of the image flux of the original deconvolved image , effectively creating an image where the continuum source is strongly suppressed , but still indicated in the image .
fig . 3 shows the s(1 ) image from 2011 in greyscale , and false color representations of the deconvolved `` high - dynamic - range '' 2012 and 2013 data in the h@xmath1 s(1 ) line ( red ) , [ feii ] 1.644 @xmath4 m ( green ) , and continuum ( blue ) . fiducial marks are included to illustrate the expansion of the s(1 ) bubble over the course of those 2 years .
[fig. 3 caption (fragment): ... s(1 ) image ( red channel ) and the continuum subtracted [ feii ] emission line image from 2012 nov . 4 . all three images are aligned to better than 1 mas , even though the deconvolution residuals in the continuum subtracted images do not align precisely . to mark the position of the continuum source svs 13 , the blue channel shows 10% of the continuum near the [ feii ] line . a few fiducial marks indicate positions of the bubble rim in 2012 . the comparison of these marks to the images taken earlier and later clearly indicated the expansion motion of the bubble .]
the osiris final data reduction pipeline ( frp ) extracts the data from the raw 2-dimensional detector frame and forms a spectral data cube with 0.2 nm spacing in the h band ( 4@xmath17 grating order ) and 0.25 nm in the k band ( 3@xmath16 grating order ) .
the oh airglow lines in the bandpass of the hn3 and kn2 filters give an opportunity to check the wavelength calibration and to measure the spectral resolution . in particular , the oh(5,3)r@xmath23(2 ) line at 1.644216 @xmath4 m is very close to the wavelength of the [ feii ] 1.644 @xmath4 m line .
for the analysis of the [ feii ] line emission , the uniformly distributed oh emission was eliminated by adjusting the spatial median of the flux in the image to zero .
in contrast , for the wavelength calibration , the original data were used and the flux in the oh line was measured . to account for continuum stray light , a continuum subtraction using the same continuum as for the [ feii ] line extraction was used .
the individual oh airglow lines were measured by summing up the flux of 400 pixels without detectable object flux , and measuring the position and width of the oh airglow lines in the resulting one - dimensional spectrum using the iraf task splot .
we used the vacuum wavelengths of oh lines given by @xcite and found that the wavelength calibration provided by the osiris reduction pipeline required a slight correction in zero point and dispersion , of order 0.01 nm .
after this correction , the individual oh line positions have residual wavelength errors of 0.008 nm , about a factor of two larger than the fit residuals given by our wavelength reference , @xcite .
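the zero - point and dispersion correction amounts to a linear refit of the pipeline wavelength axis against the oh catalog . a minimal sketch follows ; the line lists below are illustrative stand - ins ( only the 1644.216 nm oh line is quoted in the text ) :

```python
import numpy as np

# measured centroids from the sky spectrum vs. catalog vacuum wavelengths (nm);
# apart from 1644.216 nm these entries are hypothetical placeholders.
measured = np.array([1611.55, 1644.23, 1650.12])
catalog  = np.array([1611.54, 1644.216, 1650.11])

slope, offset = np.polyfit(measured, catalog, deg=1)

def corrected(wl_nm):
    """Re-map the pipeline wavelength axis onto the oh-based solution."""
    return slope * wl_nm + offset
```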
the oh airglow lines are , in reality , close doublets with line spacing well below the resolution of osiris .
the measured fwhm of the oh lines was 0.44 @xmath11 0.06 nm in the hn3 filter , and 0.61 @xmath11 0.06 nm in the kn2 filter , a little more than two wavelength planes in the osiris data cubes .
near the [ feii ] line at 1.644 @xmath4 m , one wavelength interval of the data cube corresponds to 37 @xmath25 in radial velocity .
the spectral resolution near that line therefore corresponds to 81 @xmath25 . near the h@xmath1 10 s(1 ) line at 2.122 @xmath4 m , one wavelength interval of 0.25 nm corresponds to a radial velocity difference of 35 @xmath25 , and the measured fwhm of the spectral lines corresponds to 85 @xmath25 width of the spectral profile .
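for reference , both channel widths follow directly from the doppler relation ( a worked check using the wavelengths quoted above ) :

\Delta v = c \, \frac{\Delta\lambda}{\lambda} : \quad 3\times10^{5}\ \mathrm{km\,s^{-1}} \times \frac{0.2\ \mathrm{nm}}{1644.0\ \mathrm{nm}} \approx 37\ \mathrm{km\,s^{-1}} , \qquad 3\times10^{5} \times \frac{0.25}{2121.8} \approx 35\ \mathrm{km\,s^{-1}} .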
most of the velocity effects discussed in this paper are therefore smaller than one spectral resolution element .
the accuracy of velocity measurements depends on the signal - to - noise ratio of the emission in question . in our discussion of the velocity structure of the [ feii ] microjet in section 4.5 , we will present a detailed analysis of the measurement errors in that particular case .
we have taken data on an a0v star during the observations , but in the end have chosen not to use these for telluric absorption correction . our data analysis does not rely on spectrophotometric correction of the data cubes . in the hn3 filter , the studies of the jet in the 1.644 @xmath4 m [ feii ] line and of the hi 13 - 4 and hi 12 - 4 brackett lines are potentially all affected by the broad atomic hydrogen absorption in the standard star , and the data reduction pipeline would therefore interpolate through the hydrogen lines , making the telluric absorption correction meaningless at these wavelengths .
we have also not been able to use those standard star measurements for a flux calibration of our spectra .
a consistency check of data obtained with different exposure times showed a problem with the way osiris computes the effective integration times for multi - sampled exposures .
we do not have the data to fully calibrate this effect , and given that the flux calibration is not essential for our data analysis , we prefer not to give a flux calibration for our data .
figure 1 shows the h@xmath1 shock - excited line emission associated with svs 13 on four different spatial scales : the top image is an archival spitzer space telescope infrared array camera ( irac ) @xcite channel 2 ( 4.5 @xmath4 m ) image covering most of the outflow emission .
the second panel is a small cutout from a ground - based , seeing limited image in the h@xmath1 10 s(1 ) line obtained at the uh 2.2 m telescope , showing just the emission in the immediate vicinity of svs 13 .
the third panel from the top is the s(1 ) integrated line image obtained with the 100 mas scale of osiris in 2011 and the bottom panel shows the 20 mas scale s(1 ) image from osiris obtained in 2012 , without deconvolution .
we show this figure to demonstrate the relationship between the bubbles in the svs 13 outflow very close to the source and the larger scale structure of the flow further downwind . this relationship is not entirely trivial , since not all the bubbles have propagated in the same direction from the svs 13 star .
the seeing - limited ground - based image shows the brightest of these bubbles in the glare of the psf .
the intermediate , 100 mas scale , osiris line image shows three distinct partial bubbles near the source of the outflow , the outer two of which were also observed , but not further discussed , by @xcite , on hst / nicmos images and had also been detected through ukirt narrow - band fabry - perot imaging by @xcite .
the best prior detection of these bubbles was by @xcite who used adaptive - optics - assisted long - slit spectroscopy and detected all the bubbles as a series of emission maxima along the slit .
the most recent emission , the 0@xmath02 long [ feii ] jet , its faint extension into the s(1 ) bubble , and that youngest s(1 ) bubble are oriented at p.a . 145@xmath3 , as seen in fig . 3 . the older two bubbles , visible in the 100 mas image of fig . 1 , lie more to the south , at p.a . 159@xmath3 , than those aforementioned features , indicating some variations in the jet emission direction . in contrast , the main chain of herbig - haro knots hh 7 - 11 lies at p.a . 123@xmath3 @xcite .
while at optical wavelengths , the svs 13 outflow manifests itself only in the herbig - haro chain hh 7 - 11 @xcite , longer wavelengths increasingly reveal a counterflow that appears displaced from the hh 7 - 11 axis , and appears in general fainter and less organized ( fig . 1 , top panel ) .
based on spitzer telescope images over a time span of 7 years , @xcite have obtained one proper motion data point for a relatively well defined knot in that counterflow , and this one point is consistent with the counterflow originating from svs 13 .
also , no other embedded protostar has been detected by @xcite to the north of svs 13 that might explain this flow as being independent from svs 13 .
the displacement in the outflow axis , in combination with the recent changes in outflow direction reported in this paper , suggests that the source of the svs 13 outflow is changing direction , probably due to some precessing motion , as will be discussed in more detail in section 4.2 . as an explanation for the differences between the hh 7 - 11 flow and the counterflow , @xcite have suggested that the northern counterflow enters into the central cavity of ngc 1333 , resulting in different ambient pressure and excitation conditions than the hh 7 - 11 flow .
we have obtained astrometry of the bubble expansion on two different spatial scales .
relatively wide field osiris data cubes with the 100 mas spaxel@xmath13 scale were obtained on keck ii on 2011 august 21 and keck i on 2013 november 23 . a difference image of the h@xmath1 s(1 ) emission at these two epochs is shown in fig . 4 and illustrates that all three of the closest shock fronts to svs 13 are showing noticeable motion .
the farthest of these three shock fronts was well enough defined to allow a cross - correlation measurement of its expansion age in the box indicated in the figure .
astrometry of the smallest and youngest expanding bubble was done on the 2012 november 4 and 2013 november 22/23 data that were taken at the keck i telescope with the same adaptive optics system and the same spaxel scale ( 20 mas spaxel@xmath13 ) and are therefore suitable for a precise astrometric measurement . in this case , the individual planes of the data cube were deconvolved using the lucy - richardson algorithm , to improve the definition of the bubble edge .
figure 3 illustrates the expansion and motion of the youngest , smallest bubble between the three epochs by superposing fiducial marks that outline features in the middle ( 2012 ) image . for fine registration of the images , the iraf task xregister was used to measure the relative alignment of all the frames at the position of the svs 13 stellar object , and a magnified and optimally registered version of the images was produced . to measure the expansion of the bubble , we worked under the assumption that its motion relative to the star can be described as a simple constant velocity expansion with the origin at the position of the star .
we computed magnified versions of the individual 2012 november 4 images with magnification factors in the range from 0.95 to 1.10 and the center of the magnification on the svs 13 star .
we then computed the product of these magnified images and each of the individual 2013 november 22 images .
the average of this product in a box centered on the s(1 ) bubble feature image varies smoothly with expansion factor for each of these pairs and the maximum of this one - dimensional correlation function was simply read from the table of cross - correlation values .
we show the cross - correlation functions of the 2012 and 2013 high resolution images in fig . 5 to illustrate the variations due to noise and deconvolution noise amplification , and to document how the errors of this measurement were obtained .
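the magnify - and - multiply search described above is straightforward to sketch . a minimal version follows , assuming star - registered images with the star at the array center and a `box` of slices on the bubble rim ; the function names are illustrative , not the authors' code :

```python
import numpy as np
from scipy.ndimage import zoom

def match_shape(img, shape):
    """Center-crop or zero-pad `img` to `shape` (approximate re-centering)."""
    out = np.zeros(shape)
    r, c = min(img.shape[0], shape[0]), min(img.shape[1], shape[1])
    ro, co = (shape[0] - r) // 2, (shape[1] - c) // 2
    ri, ci = (img.shape[0] - r) // 2, (img.shape[1] - c) // 2
    out[ro:ro + r, co:co + c] = img[ri:ri + r, ci:ci + c]
    return out

def expansion_factor(img_2012, img_2013, box):
    """Magnification about the star that best maps the 2012 image onto 2013."""
    factors = np.arange(0.95, 1.1001, 0.005)
    score = [np.mean(match_shape(zoom(img_2012, f), img_2012.shape)[box]
                     * img_2013[box])
             for f in factors]
    return factors[int(np.argmax(score))]  # peak of the 1-d correlation function
```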
[fig. 5 caption (fragment): ... error of the mean scaling factor . the mean value corresponds to a kinematic start of the expansion in 1980 , an early limit to the true starting date since the expanding bubble will realistically be decelerated .]
the expansion factor so determined was then converted to a kinematic expansion age of 32 yrs ( prior to 2012 ) , i.e. , a kinematic starting time of the expansion in 1980 , which gives the upper limit to the true age of the bubble assuming that the bubble has been expanding at constant velocity . since , realistically , the bubble is expanding into the dense environment of a molecular core , the true age of this youngest bubble will be smaller than the kinematic expansion age and is therefore consistent with this expanding bubble having been generated in the 1990 photometric outburst of svs 13 that will be discussed in section 4.4 . in 2012 , the apex of the bubble was located 654 mas from the star .
the front end of the bubble therefore has moved with an average projected angular velocity of 20 mas yr@xmath13 ( 4.7 au yr@xmath13 = 22.3 km s@xmath13 at 235 pc distance ) away from the star .
similarly , in 2012 , the bubble had a radius of 212 mas and an average radial proper motion of 6.6 mas yr@xmath13 ( 1.55 au yr@xmath13 = 7.35 km s@xmath13 ) .
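these angular - to - linear conversions follow from the small - angle relation at d = 235 pc , with 1 au yr@xmath13 = 4.74 km s@xmath13 ; a worked check for the leading shock front :

v = \mu \, d = 0.020''\,\mathrm{yr^{-1}} \times 235\ \mathrm{pc} = 4.7\ \mathrm{au\,yr^{-1}} = 4.7 \times 4.74\ \mathrm{km\,s^{-1}} \approx 22.3\ \mathrm{km\,s^{-1}} ,

and the same relation turns 6.6 mas yr@xmath13 into 1.55 au yr@xmath13 = 7.35 km s@xmath13 for the radial expansion .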
the proper motion of the bubble center is therefore the proper motion of the leading shock front minus the radial expansion : 13.4 mas yr@xmath13 @xmath2 15 km s@xmath13 . in order to also obtain an estimate of the kinematic age of the next two more distant bubbles , we used data cubes obtained on 2011 august 21 on keck ii with the 100 mas scale , and similar measurements obtained on 2013 november 22 on keck i. the difference of these two images is shown in fig . 4 .
the only shock front where a clear maximum of this correlation function was detected in these 100 mas data was the most distant of the three , indicated by a box in fig . 4 , separated from the star by 2.87@xmath26 in 2011 .
for this bow shock front , we derive a kinematic formation time of 1919 @xmath11 7 and a linear projected proper motion of 31 mas yr@xmath13 .
we have not obtained a reliable kinematic age for the middle shock - excited feature at @xmath2 1@xmath05 from the star , between those two features discussed above ( fig . 4 ) .
the fact that this rather poorly defined system of shock fronts lies pretty precisely in the middle of the 1980 and 1919 ( kinematic formation time ) features suggests that this feature must have formed , again in the kinematic sense without accounting for deceleration , around 1950 .
if there is a regular pattern to the formation of these bubbles , which with only three examples can not be convincingly established yet , the next outburst could be expected within the next decade .
it should be noted , as was already pointed out by @xcite , that the proper motion of the major shock fronts in the older parts of the svs 13 outflow indicates a much longer time interval between shock front generating events : about 500 yrs .
it is not clear whether svs 13 exhibits multiple periods , or whether the frequency of ejection events has recently increased .
an argument for the latter point of view may be that the outflow direction has apparently changed in the past century , as we will discuss now . as demonstrated by fig . 1 , the two previous bubbles are located more toward the se of svs 13 and , for example , @xcite chose p.a . 159@xmath3 as the best slit orientation to cover them .
the larger chain of hh objects 7 to 11 is oriented along a position angle of 123@xmath3 @xcite .
the larger scale proper motion study by @xcite based on spitzer 4.5 @xmath4 m images shows some emission knots to the se of svs 13 with a proper motion vector generally to the se , in particular the herbig - haro knots 7 , 8 , and 10 . north of svs 13 , in the counter - flow , they find an emission knot with a generally northern proper motion vector ( p.a . -10@xmath3 ) that they ascribe to a chance superposition of another outflow far to the south of svs 13 and with generally northern outflow direction .
the [ feii ] jet originating from svs 13 ( fig . 3 ) is oriented along p.a . 145@xmath3 , and the h@xmath1 bubble center is displaced from svs 13 along the same angle . with the exception of this most recent h@xmath1 bubble , the other recent mass ejection events from svs 13 have therefore ejected material initially in a more southerly direction ( p.a . @xmath2 155@xmath3 - 159@xmath3 ) , as seen in figs . 1 and 4 . with outflows generally being bipolar , there must also be mass ejected into a northerly direction .
we therefore believe that , contrary to the assertion by @xcite , the 4.5 @xmath4 m emission knot north of svs 13 and with northerly proper motion is part of the counter jet to the bubbles reported here .
the more distant parts of the s(1 ) emission nw of svs 13 are anti - parallel to , but laterally displaced from , the hh 7 - 11 system of emission knots .
we suggest that the hh 7 - 11 chain , the system of bubbles immediately south of svs 13 , the emission knots north of svs 13 , and the more distant emission knots further to the nw ( fig . 1 ) are all part of the same bipolar outflow originating in svs 13 .
this outflow has the s - shaped morphology indicative of a precessing or otherwise unstable jet source . corroborating this , radio interferometry mapping of molecular emission near svs 13 by @xcite has similarly found an orientation of high - velocity material south of svs 13 different from that of the hh 7 - 11 herbig - haro chain .
they had already concluded that the differences in the alignment of features of different age indicate a precessing source of the outflow .
a prominent other example of such s - shaped morphology of a molecular hydrogen jet , iras 03256 + 3055 , is located just south of svs 13 in ngc 1333 and has been studied in detail by @xcite .
the relatively low proper motion and spatial velocity measured for the h@xmath1 bubble studied here is consistent with proper motion measurements of the more distant hh 7 - 11 chain of shock fronts by @xcite , @xcite , and @xcite , but is inconsistent with the much higher proper motions reported by @xcite in the first near - infrared proper motion study of hh 7 - 11 . while a precessing accretion disk provides an explanation for the rapid changes in the outflow direction observed in svs 13 , our data do not show any indication of the presence of a companion object that would be close enough to cause the disk precession on the timescales discussed here . from the size and location of the h@xmath1 bubbles , we can conclude that bubble ejection events happen with a period of several decades . in the model where such events are triggered by periastron passages of a companion object , the orbital semimajor axis must be of order tens of au , or several of the original 20 mas spaxels of our data . the fact that we do not see a companion object implies that such an object , if it existed , is intrinsically too faint and/or too deeply embedded to be visible in the h and k atmospheric windows .
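as a rough , purely illustrative check , kepler's third law with the @xmath2 30 yr ejection period and an assumed ( not measured ) total system mass of 1 - 3 solar masses gives

a = ( p^{2} \, m )^{1/3} = ( 30^{2} \times [ 1 \;\mathrm{to}\; 3 ] )^{1/3} \approx 10 \;\mathrm{to}\; 14\ \mathrm{au} ,

consistent with the tens - of - au scale quoted above .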
The only other known case of a young stellar object with a pronounced bubble structure in its outflow is XZ Tau, which was studied in detail by @xcite on the basis of optical multi-epoch HST imaging. Very similar to the case of SVS 13 discussed here, their images show a series of bubbles with a collimated jet propagating inside the bubbles. @xcite reported initial results of numerical simulations of a very young pulsed jet in close proximity to its driving source. Their simulations were specifically tuned to reproduce the observations of the XZ Tau A chain of bubbles and therefore modeled a faster, more rapidly pulsing jet, resulting in more rapidly expanding, overlapping bubbles. The general scenario underlying their model is, however, applicable to our case of SVS 13: the FUor-like photometric outburst in 1990 created a short-lived pulse of jet activity. This newly created, relatively fast jet ran into slower-moving material ejected prior to the outburst event, and this internal shock created an expanding ``fireball'' that subsequently expanded ballistically into a bubble carried away from the star by the outflow. Some time after the formation of the bubble, a fast continuous jet then emerges to catch up with the bubble, pierce it, and partially destroy it. In SVS 13 this process currently repeats itself about every 30 years, creating the series of bubble fragments that forms the string of Herbig–Haro objects. In distinction from XZ Tau, the case of SVS 13 also involves a significant change in the direction of the jet and bubble ejection, leading to the S-shaped overall morphology of the Herbig–Haro chain.
The individual velocity channels of the H₂ S(1) line emission are shown in Fig. 6. In the blueshifted wing of the velocity distribution, emission projected on the center of the bubble is visible, which is the expected characteristic of an expanding three-dimensional bubble rather than a two-dimensional ring. In Fig. 7, a color-coded velocity map of the H₂ 1–0 S(1) line emission is presented. The S(1) emission shows three distinguishable velocity features. Emission near the intersection with the jet (traced in [Fe II]) shows the smallest blueshifted velocities and is coded red. The rim of the S(1) bubble, where the line of sight is tangential to the bubble, shows intermediate velocities, coded yellow in the figure. The highest velocities towards the observer are measured in the filamentary features projected against the center of the bubble and are coded in blue. In the brightly visible rim, the velocity centroid varies between −40 and −55 km s⁻¹ relative to the systemic velocity of the molecular material around SVS 13, while in the simple model of an expanding shell those velocities should be constant and representative of the center motion of the bubble. We take −47 ± 7 km s⁻¹ as the typical radial velocity of the bubble center.
[Fig. 6: H₂ 1–0 S(1) line images of the SVS 13 jet. The velocities indicated in each panel are relative to the systemic velocity of the SVS 13 core. The bubble feature is blueshifted relative to the systemic velocity.]
[Fig. 7: H₂ 1–0 S(1) line at 2.122 µm. Three distinct velocity features can be distinguished: the emission at the intersection of the jet with the bubble has the lowest (blueshifted) radial velocity, the rim of the bubble has intermediate velocities, and the features projected against the center of the bubble have the highest blueshifted velocities. Overlaid on the velocity map are contours of the high-dynamic-range Lucy–Richardson deconvolved flux maps that indicate the position of SVS 13 itself. A color version of this figure is available in the electronic version of this paper.]
Combined with the proper motion of the bubble center of 15 ± 2 km s⁻¹, this suggests an inclination angle of 18° ± 3° against the line of sight.
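The quoted angle follows from elementary geometry (our worked example, not a derivation taken from the paper): the inclination of the motion to the line of sight satisfies

\[ \tan i = \frac{v_{\rm tangential}}{v_{\rm radial}} = \frac{15 \pm 2}{47 \pm 7} \;\Rightarrow\; i = \arctan(15/47) \approx 17.7^\circ \approx 18^\circ, \]

with the ±3° spread following from propagating the two velocity uncertainties.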
@xcite had given an inclination angle of 20° to 40° for emission close to SVS 13, while @xcite gave 40° for more distant emission knots. All these measurements of the inclination angle were made on different shock features; their spread is therefore a combination of measurement uncertainties and true variations in the motion of these shock fronts. Irrespective of which of the shock fronts is measured, all data indicate that the outflow emerging from SVS 13 is pointed strongly towards the observer. This explains why the counter-jet, which moves away from the observer and into the molecular core around SVS 13, is not detectable at optical wavelengths. Shock fronts of this velocity running into stationary ambient molecular hydrogen are certainly capable of exciting the v = 1–0 S(1) line emission and are not in danger of dissociating the H₂; see, for example, @xcite.
The infrared source SVS 13 was discovered by @xcite at a K-band magnitude of 9.08 in a 36″ aperture. Soon after the discovery, @xcite reported K = 8.48 in a 30″ aperture, observed in 1978. @xcite reported an observation by G. Olofsson from 1980 at K = 8.7 in a 14″ aperture as a private communication. @xcite measured a pre-outburst brightness of K = 9.34 in a 6–8″ aperture at the IRTF on 1981 Oct. 11–13, and @xcite measured K = 9.30 in a 16″ aperture in 1981 Dec. The object experienced a sudden increase in brightness around 1990, as first reported by @xcite and further studied by @xcite, @xcite, and @xcite. In the K band, where the best pre-outburst data are available, as listed above, the pre-outburst magnitude showed some variation between 9.0 and 9.5 mag. Post-outburst, @xcite documented brightness variations between 8.0 and 8.6 mag. Based on the small amplitude of the brightness increase, the post-outburst brightness fluctuations, and the emission lines in its spectrum, both @xcite and @xcite concluded that SVS 13 underwent an EXor or similar outburst, but both papers left open the possibility that the outburst may be of a different nature. @xcite studied the photometric behavior of SVS 13 again and concluded that the object had not returned to its pre-outburst brightness at that time. Motivated by the uncertain classification of this event, we have re-examined the historical photometry and discuss new measurements.
We have tried to gather the available photometric data on SVS 13 from the literature and data archives. While many images of the SVS 13 region exist, the star SVS 13 is saturated in most of them, and useful data can only be obtained from shallow surveys, usually with small telescopes. In Fig. 8, we show all the available photometric data as a light curve. The 2MASS survey lists SVS 13 at Ks = 8.169; these data were obtained on Nov. 26, 1999. Recent Ks data from the IRIS telescope @xcite show SVS 13 varying in the range Ks = 8.46 to 8.70 between 2012 August and 2014 January. The photometric color transformations between the UKIRT system and the 2MASS system for the K vs. Ks filters are insignificant @xcite, so a direct comparison can be made between the measurements by @xcite and the most recent data. There is extended emission around SVS 13, so very large photometric apertures tend to overestimate the brightness. The @xcite, @xcite, @xcite, and @xcite data were corrected to the 6″ aperture diameter used for the IRIS photometry, while all other photometric data shown here were originally obtained with apertures in the range of 5″–8″, close enough to the IRIS aperture to not require a correction.
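The following is a minimal sketch of the kind of aperture homogenization described above. The correction values, function name, and threshold are our illustrative assumptions, not the ones used in the paper; real corrections would be measured from the curve of growth of the extended emission around SVS 13.

# Python sketch (hypothetical numbers throughout)
AP_CORR_MAG = {36.0: 0.25, 30.0: 0.20, 16.0: 0.10, 14.0: 0.08}  # assumed corrections

def to_reference_aperture(k_mag, aperture_arcsec, ref_arcsec=6.0):
    """Correct a K magnitude measured in a large aperture to the
    6 arcsec reference aperture of the IRIS photometry.
    Large apertures pick up extended emission and so overestimate the
    brightness; the corrected magnitude is fainter (numerically larger)."""
    if abs(aperture_arcsec - ref_arcsec) <= 2.0:
        return k_mag  # 5-8 arcsec apertures: no correction needed
    return k_mag + AP_CORR_MAG[aperture_arcsec]

print(to_reference_aperture(8.48, 30.0))  # -> 8.68 with the assumed table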
Figure 8 shows these aperture-corrected photometric values and clearly demonstrates that SVS 13 has not declined back to its pre-outburst (∼1990) brightness, but remains at or near its peak post-outburst brightness, with some indication that the brightness fluctuations have diminished over the course of the past 24 years. Similar to the case of the low-luminosity, deeply embedded cometary nebula OO Ser (@xcite and @xcite), SVS 13 defies a clear classification as either FUor or EXor. While FUors generally show late-type, low-gravity absorption spectra in the infrared @xcite, indicative of a luminous, optically dense disk, the prototypical EXor EX Lupi shows the CO bandheads and many other optical lines in emission during extreme EXor outbursts @xcite. During minor outbursts, optical emission lines are observed, but CO may be in photospheric absorption @xcite. SVS 13 shows the CO bandheads in emission @xcite. In the context of this paper, the important conclusion is that the elevated post-outburst level of accretion, and therefore of jet activity, in SVS 13 has persisted for the past two decades to the present. The light curve of SVS 13 thus shares the long-duration maximum with FUors, while spectroscopically it resembles an EXor, and the small outburst amplitude resembles neither. This suggests that, at least for the younger, more deeply embedded accretion-instability events, the traditional two classes may not be appropriate, and that a continuous range of outburst characteristics may be a better way to understand this phenomenon. The light curve also implies that the next outburst of SVS 13, if indeed these outbursts occur repetitively about every 30 years, will start from a brighter state of SVS 13 than the previous one. This would mean that, in addition to the repetitive outbursts, we are also observing cumulative changes in SVS 13.
The seeing-limited integral field spectroscopy of @xcite had only marginally resolved the [Fe II] emission around SVS 13. Our adaptive-optics-corrected OSIRIS data show that [Fe II] traces a high-velocity microjet that extends from the source SVS 13 into the area of the most recent molecular hydrogen bubble. The results are summarized in the false-color images (Fig. 3). Here, the wavelength-integrated, continuum-subtracted flux of the H₂ 1–0 S(1) line was Lucy–Richardson deconvolved and is displayed in the red channel. To show the location of the stellar central object in SVS 13 without introducing artifacts from the imperfect deconvolution of a dominant bright source, we added 10% of the deconvolved flux of two continuum wavelength channels to the continuum-subtracted image. This produces, in effect, a high-dynamic-range version of the SVS 13 image for the purpose of showing the relationship of features at different wavelengths. Note that the wavelength channels used here were different from those used as the PSF; the resulting deconvolved continuum image is therefore not the trivial solution of deconvolving the PSF with itself. The green channel of Fig. 3 shows the deconvolved integral over the [Fe II] line at 1.644 µm. In the same way as described above, a fraction of the continuum wavelengths immediately adjacent to the line was added to the data to mark the location of the stellar source. Finally, to adjust the color balance of the stellar object to white, the same fraction of the continuum wavelength channels on either side of the [Fe II] line was assigned to the blue channel. Our Lucy–Richardson deconvolved images of the high-excitation [Fe II] shocks associated with the jet show that the jet is very narrow, about 20–40 mas wide.
The velocity diagram in Fig. 9 shows that the [Fe II] emission of the jet is blueshifted by −140 to −150 km s⁻¹ relative to the systemic velocity of the molecular material around SVS 13 @xcite. A comparison of Figs. 7 and 9 demonstrates that the [Fe II] emission is more blueshifted than the H₂ S(1) emission. The direct superposition of the H₂ and [Fe II] emission-line images shows that the bright portions of the microjet [Fe II] emission extend up to the rim of the H₂ bubble, and that faint traces of [Fe II] emission can be detected up to about the center of the bubble. The jet [Fe II] intensity drops by a factor of ∼20 at the bubble surface; in fact, the brightest [Fe II] along the jet axis is seen directly upwind from the H₂ bubble surface. The total 1.644 µm [Fe II] flux is dominated by the emission outside of the expanding H₂ bubble. Since the front edge of the bright jet component near the bubble rim is not sharply defined, and is subject to the degree of Lucy–Richardson deconvolution and the different quality of the adaptive-optics correction achieved at the two epochs, we could not directly measure the proper motion of the jet front edge. From Fig. 3 it seems clear that the [Fe II] shock emission from the jet is surrounded by an envelope of entrained ambient material radiating in the low-excitation shocked H₂ lines, and that at least the bright portions of the jet [Fe II] emission terminate at the bubble rim. From the proper motion of the bubble front side and the rate of bubble expansion (Section 4.2), the expected, but not directly measured, proper motion of the side of the bubble facing SVS 13 (the back side in the direction of motion) is 6.6 mas yr⁻¹, which we also take as the proper motion of the front end of the bright jet. The length of the [Fe II] jet is therefore changing only very slowly, at a rate of ∼7 mas yr⁻¹, and the centroid of the [Fe II] flux therefore moves at less than half this speed. This is responsible for the impression, noted already by @xcite, that the [Fe II] emission looked stationary. What we see from the jet in the [Fe II] emission line are either internal shocks or the shocks resulting from interaction with ambient molecular material that then gets entrained by the jet. The bulk of the jet material propagates into the area of the H₂ bubble, but the excitation conditions there are, apparently, less favorable to the formation of such shocks radiating in [Fe II].
[Fig. 9: Velocity maps from the 1.644 µm wavelength data cubes of the SVS 13 jet, deconvolved using the Lucy–Richardson algorithm, for the 2012 and 2013 data, respectively. The bottom panels show the corresponding velocity sigma maps. The white contours outline the line-integrated [Fe II] flux distribution. The figure clearly shows that all the line emission from the jet is blueshifted relative to the systemic velocity of SVS 13. A color version of this figure is available in the electronic version of this paper.]
We have tried, in both the 2012 and 2013 data, to detect the signature of jet rotation. Initially, using only the 2012 data, such rotation appeared to be indicated @xcite. However, the 2013 data, and the re-reduced 2012 data with the new calibrations, did not confirm this (Fig. 9). The velocity pattern measured in 2013 was more confused and, if anything, a faint indication of the opposite rotation direction was found. In Fig. 9 (lower panels) we show the RMS variations of the velocity measurements on the individual data cubes that were coadded to form the velocity maps in the top panels. These indicate that the errors of the velocity maps are below 10 km s⁻¹ in most parts of the [Fe II] jet. We conclude that the excellent spatial resolution and moderate spectral resolution of OSIRIS are not sufficient to resolve the kinematic signature of jet rotation in SVS 13.
The permitted atomic hydrogen line emission from SVS 13 was found to be broad, with line widths of ∼180 ± 10 km s⁻¹, for Brγ by @xcite and for Br-12 by @xcite. The emission is centered on the position of the continuum source, but spatially unresolved. Our data cube in the Hn3 filter contains the Br-12 and Br-13 hydrogen recombination lines, which trace the same hot hydrogen recombination regions as the more frequently used Brγ line does. In Fig. 10, we show the deconvolved images of SVS 13 across the Br-13 emission line after subtraction of the continuum emission. This figure confirms that emission in the atomic hydrogen recombination lines is spatially centered on the young star and has the same flux profile as the continuum, i.e., we see no indication that the Br-13 emission is extended. Figure 10 has 10 mas pixels (2.35 AU), and any systematic differences between the line and continuum PSF are well below that angular scale. We conclude that the zone of atomic hydrogen emission, presumably the accretion disk itself, is less than 2 AU in extent. It is actually expected to be only of order the dimensions of the star itself, i.e., about two orders of magnitude smaller than this limit. Our velocity data are consistent with the higher-spectral-resolution data of @xcite, who found that the Brγ line is centered at −25 (@xmath30) km s⁻¹ relative to the systemic velocity of the SVS 13 core, essentially at the same velocity as the star.
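The pixel-to-AU conversion quoted above is just the small-angle relation (our note; the adopted distance is implied by the paper's own numbers): with D ≈ 235 pc for NGC 1333,

\[ \frac{s}{\rm AU} = \frac{\theta}{\rm arcsec}\,\frac{D}{\rm pc} = 0.010 \times 235 = 2.35 . \]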
We have presented adaptive-optics-corrected integral field spectroscopy of the young outflow source SVS 13 in NGC 1333. The H₂ 1–0 S(1) line at 2.122 µm, indicating low-velocity shocks, the higher-excitation [Fe II] line of the microjet at 1.644 µm, and atomic hydrogen emission in the H I 12–4 and 13–4 lines were analyzed and led us to the following main conclusions:
1. The HH 7–11 outflow originates in SVS 13, which is identical to VLA 4B.
2. The outflow, at present, originates as a microjet of ∼0.2″ length, detectable in [Fe II] and oriented at P.A. ∼145°.
3. The formation of the youngest, partly formed bubble visible in H₂ emission can be traced back to the ∼1990 outburst.
4. The bright parts of the [Fe II] microjet reach up to the boundary of the H₂ S(1) bubble, but fainter [Fe II] emission can be traced another ∼0.2″ to near the center of the bubble.
5. Beyond that, H₂ S(1) emission outlines a curved path of the jet.
6. The orientation of the next two bubbles at P.A. ∼159° is roughly point-symmetric to the orientation of outflowing material in the counter-jet found by @xcite in CO and by @xcite at 4.5 µm.
7. The chain of bubble fragments and their proper motions suggest that bubble-generating events are occurring repetitively, roughly every 30 years, at the present time (a rough consistency check follows this list).
8. The formation of a series of expanding bubbles within the outflow by a series of eruptive events provides an explanation for the widening of the outflow cavity.
9. Atomic hydrogen emission in the H I 12–4 and 13–4 (Brackett series) lines is detected around the continuum position of SVS 13, indicating ongoing accretion onto the star.
10. The outflow source SVS 13 remains at or near the peak brightness reached during the 1990 outburst. The light curve therefore resembles that of FUor-type objects, while the emission-line spectrum matches the characteristics of EXors; SVS 13 therefore represents an object somewhere between those classical classes.
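On conclusion 7, a rough consistency check (ours, using only numbers quoted in this paper): at the back-side proper motion of 6.6 mas yr⁻¹, a 30 yr recurrence corresponds to a spacing of

\[ \Delta\theta \approx \mu P = 6.6\,{\rm mas\,yr^{-1}} \times 30\,{\rm yr} \approx 0.2'' , \]

which matches the ∼0.2″ length scale of the microjet and bubble quoted in conclusions 2 and 4.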
Most of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Some photometric data on SVS 13 were obtained at the IRIS telescope on Cerro Armazones, which is operated under a cooperative agreement between the Astronomisches Institut, Ruhr-Universität Bochum (Germany), the Universidad Católica del Norte in Antofagasta (Chile), and the Institute for Astronomy, University of Hawaii (USA). Construction of the IRIS infrared camera was supported by the National Science Foundation under grant AST07-04954. The operation of the IRIS telescope is supported by the Nordrhein-Westfälische Akademie der Wissenschaften und der Künste in the framework of the academy program of the Federal Republic of Germany and the state of Nordrhein-Westfalen. We wish to thank Angie Barr Dominguez, Thomas Dembsky, Holger Drass, Lena Kaderhandt, Michael Ramolla, and Christian Westhues for operating the IRIS telescope for the acquisition of the data used in this paper, Ramon Watermann for writing the data reduction pipeline, and Roland Lemke for technical support. We thank Tuan Do and Jessica Lu for kindly providing information about the OSIRIS spaxel scales on the Keck I and Keck II telescopes. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center / California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication also uses archival data obtained with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
Skrutskie, M. F., et al. 2006, AJ, 131, 1163 | We present the results of Keck telescope laser adaptive optics integral field spectroscopy with OSIRIS of the innermost regions of the NGC 1333 SVS 13 outflow, which forms the system of Herbig–Haro objects 7–11.
We find a bright, 0.2″-long microjet traced by the emission of shock-excited [Fe II]. Beyond the extent of this jet, we find a series of bubbles and fragments of bubbles traced in the lower-excitation H₂ 1–0 S(1) line. While the most recent outflow activity is directed almost precisely (P.A. ∼145°) to the south-east of SVS 13, there is clear indication that prior bubble ejections were pointed in different directions. Within these variations, a clear connection of the newly observed bubble-ejection events to the well-known, poorly collimated HH 7–11 system of Herbig–Haro objects is established. Astrometry of the youngest of the expanding shock fronts at three epochs covering a time span of over two years gives kinematic ages for two of these. The kinematic age of the youngest bubble is slightly older than the historically observed last photometric outburst of SVS 13 in 1990, consistent with that event launching the bubble and with some deceleration of its expansion. A re-evaluation of historic infrared photometry together with new data shows that SVS 13 has not yet returned to its brightness before that outburst, and thus shows a behavior similar to FUor outbursts, albeit with a smaller amplitude. We postulate that the creation of a series of bubbles and the changes in outflow direction are indicative of a precessing disk and of accretion events triggered by a repetitive phenomenon, possibly linked to the orbit of a close binary companion. However, our high-resolution images in the H and K bands do not directly detect any companion object. We have tried, but failed, to detect the kinematic signature of rotation of the microjet in the [Fe II] emission line at 1.644 µm. |
The majority of core-collapse supernovae (SNe) are associated with the deaths of hydrogen-rich massive stars with zero-age-main-sequence masses above 8 M⊙. The most common hydrogen-rich Type II SNe exhibit a ∼100 day plateau in their light curves (SNe IIP), and the physics behind their emission is well understood. In particular, the energy source powering the plateau phase is the energy deposited by the shock wave soon after the explosion. The recombination of the ionized hydrogen is responsible for the photosphere moving inward, allowing the stored energy to be radiated during this phase @xcite. The recombination wave travels through the hydrogen-rich ejecta as the SN expands homologously and cools @xcite. However, a fraction of SNe II show a linear decline in their light curves after a rapid rise to maximum light (SNe IIL; @xcite). After either the plateau or the linear-decline phase, SNe II show a subsequent rapid drop, usually around 100–140 days after explosion for SNe IIP and 80–100 days for SNe IIL @xcite. This difference may be due to a smaller ejecta mass in SNe IIL.
@xcite argued that the fast light-curve decline of SNe IIL is the result of both a small hydrogen-envelope mass (1–2 M⊙) and a relatively large progenitor radius (a few 1000 R⊙). These values can be compared with the progenitors of SNe IIP, which are red supergiants (RSGs) that span radii of 100–1600 R⊙ @xcite at the moment of explosion and should have ejected masses of ∼6–10 M⊙. Recent hydrodynamical and radiative-transfer calculations have also confirmed that a smaller ejected mass may produce a short plateau, but cannot reproduce the luminosity of SNe IIL @xcite. A larger radius, a different hydrogen distribution, or extra energy sources may play an important role in shaping the light curves of SNe IIL. RSGs have been definitively identified as the progenitors of SNe IIP (e.g., @xcite; @xcite; @xcite), with masses estimated from the measured luminosities of the progenitor stars to be in the range 8–17 M⊙. The fact that more-massive RSG progenitors have not been identified has been highlighted as significant and termed the ``red supergiant problem'' @xcite. The separation, if any, between SNe IIP and IIL is an open issue. Some authors suggest that they form a continuous distribution @xcite, while others favour their separation into distinct classes @xcite. If one of the keys to differentiating SNe IIP from IIL is the amount of hydrogen in the envelope, a clear dichotomy may suggest the presence of a specific, unknown ingredient in the evolution of massive stars that sometimes strips them of discrete (large for SNe IIL, small for SNe IIP) amounts of hydrogen. Despite an increasing number of SNe II with published light curves, and growing evidence that SNe II form a continuum of light-curve properties, it remains unknown whether their progenitors span a range of stellar populations.
In this paper, we present comprehensive optical light curves of 12 SNe II that were monitored by the Las Cumbres Observatory Global Telescope (LCOGT, @xcite) network between 2013 and 2014. The LCOGT network of nine 1-m and two 2-m robotic telescopes enables photometric coverage with long temporal baselines and fast response times to cover the most time-critical phases of SN evolution. Many of these SNe II have also been followed at ultraviolet (UV) wavelengths with the UVOT camera onboard _Swift_. When possible, we will compare our data with those previously published and publicly available (see next section). The main goal of this paper is to focus on generic properties of our sample to gain an expanded understanding of the SN IIP/IIL diversity. Various authors use different values of the linear decay rate after maximum brightness to separate SNe IIP and SNe IIL. _The reader should be aware that in this paper we will not define a specific decline-rate value to distinguish SNe IIP and SNe IIL; rather, we will use IIP to refer to SNe II with flatter linear decays after maximum light (IIP-like SNe) and IIL to refer to SNe II with faster decline rates after maximum light (IIL-like SNe)._ When possible, we will colour-code the sample of objects using the decline rate of the @xmath4-band light curve (@xmath5; see Section [sec:slope] for the definition) to emphasize that we are not dividing SNe II into IIP and IIL _a priori_. In Section [sec:sample], we present our sample of new SNe as well as SNe from the literature that are used in the analysis. In Section [sec:data] we present our photometric data, and Section [sec:slope] describes the main parameters characterizing them (listed also in Appendix [apefigures]). We repeat the analysis in Section [sec:bolo] using pseudobolometric light curves. In Section [sec:early] we discuss early-time data, focusing on the UV differences between SNe IIP and IIL. We derive in Section [sec:nickel] the amount of ⁵⁶Ni produced in SNe II and compare our results with progenitor studies of SNe II @xcite. In Section [sec:nebular] we constrain the mass of the progenitor for a few SNe in our sample using their nebular spectra, and Section [sec:discussion] summarizes our results.
| High-quality collections of Type II supernova (SN) light curves are scarce because these events evolve for hundreds of days, making follow-up observations time consuming and often extending over multiple observing seasons. In light of these difficulties, the diversity of SNe II is not fully understood. Here we present ultraviolet and optical photometry of 12 SNe II monitored by the Las Cumbres Observatory Global Telescope network (LCOGT) during 2013–2014, and compare them with previously studied SNe having well-sampled light curves. We explore SN II diversity by searching for correlations between the slope of the linear light-curve decay after maximum light (historically used to divide SNe II into IIL and IIP) and other measured physical properties. While SNe IIL are found to be on average more luminous than SNe IIP, SNe IIL do not appear to synthesize more ⁵⁶Ni than SNe IIP. Finally, optical nebular spectra obtained for several SNe in our sample are found to be consistent with models of red supergiant progenitors in the 12–16 M⊙ range. Consequently, SNe IIL appear not to account for the deficit of massive red supergiants as SN II progenitors. Supernovae: general – supernovae: individual: SN 2013bu, SN 2013fs, SN 2014cy, SN 2013ej, ASASSN-14ha, ASASSN-14gm, ASASSN-14dq, SN 2013ab, SN 2013by, SN 2014G, LSQ13dpa, LSQ14gv, SN 2015W.
Europe's Rosetta mission, which aims to land on a comet later this year, has identified what it thinks is the safest place to touch down.
[Image: ESA/Rosetta/MPS for OSIRIS Team — The extracted square covers about 1km, which is approximately the extent of the uncertainty in Philae's landing precision]
Scientists and engineers have spent weeks studying the 4km-wide "ice mountain" known as 67P, looking for a location they can place a small robot.
They have chosen what they hope is a relatively smooth region on the smaller of the comet's two lobes.
But the team is under no illusions as to how difficult the task will be.
Comet 67P/Churyumov-Gerasimenko, currently sweeping through space some 440 million km from Earth, is highly irregular in shape.
[Video: Stephan Ulamec: "It is fascinating but I have to say also quite frightening"]
Its surface terrain is marked by deep depressions and towering cliffs.
Even the apparently flat surfaces contain potentially hazardous boulders and fractures.
Avoiding all of these dangers will require a good slice of luck as well as careful planning.
Pre-mission analysis suggested the chances of a successful landing on a roughly spherical body were 70-75%.
With 67P's rubber duck shape, those odds have surely lengthened, but European Space Agency (Esa) project manager Fred Jansen is excited at the prospect of trying.
"At the end of the day, you'll only know when you land. Then it will have been either 100% or zero. That's the way it is," he told BBC News.
Comet 67P/Churyumov-Gerasimenko Named after its 1969 discoverers Klim Churyumov and Svetlana Gerasimenko
Referred to as a "Jupiter class" comet that takes 6.45 years to orbit the Sun
Orbit takes it as close as 180 million km from the Sun, and as far as 840 million km
The icy core, or nucleus, is about 4km (2.5 mi) across and takes 12.4 hours to rotate
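The orbital figures above are mutually consistent, as a quick Kepler's-law check shows (our sketch, not from the article):

# Python: back-of-envelope check of 67P's quoted orbital period
AU_KM = 149_597_871  # kilometres per astronomical unit

perihelion_au = 180e6 / AU_KM   # "as close as 180 million km"
aphelion_au   = 840e6 / AU_KM   # "as far as 840 million km"
semi_major_au = (perihelion_au + aphelion_au) / 2

period_years = semi_major_au ** 1.5  # Kepler's third law: P^2 = a^3 for solar orbits
print(f"a = {semi_major_au:.2f} AU -> P = {period_years:.2f} yr")
# -> about 6.3 yr, consistent with the quoted 6.45-yr period given that
#    the distances above are rounded to two significant figures.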
The plan still is to make the landing attempt on 11 November.
The Rosetta probe will despatch its piggybacked Philae robot from a distance of about 10km to 67P.
This spider-like device will then hope to engage the surface at "walking pace", deploying screws and harpoons in an effort to lock itself down to an object that has very little gravitational attraction.
Esa says it will be a one-shot opportunity. Events will be taking place so far away that real-time radio control will be impossible.
Instead, the process will have to be fully automated with the final commands uploaded to Rosetta and Philae many hours in advance.
The choice of landing site follows a weekend of deliberations in Toulouse, France.
Mission team-members gathered to assess the latest imagery downlinked from Rosetta, which has been closely tracking 67P now since early August.
[Video: Imaging leader Holger Sierks gave details of the site at a news briefing]
Five potential landing locations were on the table, and these have now been reduced to just two - a primary and a back-up.
Both will be studied further in the coming weeks before a final go/no-go decision is made in mid-October.
The favoured location is identified for the moment simply by the letter "J". (A public naming competition will run in due course).
On 67P's smaller lobe, it has good lighting conditions, which for Philae means having some periods of darkness also to cool its systems.
"It's relatively flat, but there are still some cliffs in this terrain; there are still boulders. So, it's not easy to land on 'J'," explained Philae project manager Stephan Ulamec from the German Space Agency.
"We're getting very close now, and it is fascinating but I have to say also quite frightening to some degree - that 20 years of work boils down now to just a few hours. Are we going to be successful, or will we be unlucky, hitting a boulder that just happens to be under the lander?"
The back-up site is situated on the larger of 67P's lobes. Its designation through the selection process has been the letter "C".
[Image: ESA/ATG medialab — Getting Philae down safely to any location on the comet is going to be an immense challenge]
It hosts a range of surface features, including depressions, cliffs and hills, but - crucially - many smooth plains, also.
More detailed mapping of J and C is ongoing.
This past week, Rosetta manoeuvred into an orbit just 30km from the 67P, enabling its camera system to see details that can be measured on the sub-metre scale.
For landing, such information only has a certain usefulness, however, as the "hands-off" touchdown can only be targeted with a best precision that will likely run to hundreds of metres.
And that error is larger than any of the apparently smooth terrains on the reachable parts of the comet.
The whole separation, descent and landing procedure is expected to take seven hours.
If Philae gets down successfully into a stable, operable configuration, it would represent a historic first in space exploration.
But Esa cautions that this high-risk venture should really be seen as an "exciting extra" on the Rosetta mission.
The major objective from the outset has been to catch the comet with the Rosetta probe and to study it from orbit.
This is happening right now. The spacecraft's array of remote-sensing instruments are currently investigating the comet's properties, endeavouring to find out how the object is constructed and from what materials.
[Image: ESA/Rosetta/NAVCAM — The 10-billion-tonne target: 67P is roughly 4km wide with a bulk density of about 400kg per cubic metre - similar to many types of wood]
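As a plausibility check on the caption's mass figure (our sketch, not ESA's calculation; the spherical shape is an assumption, since the real nucleus is highly irregular):

# Python: order-of-magnitude mass estimate for 67P
import math

radius_m = 4000 / 2   # "roughly 4km wide" -> 2 km radius, assuming a sphere
density  = 400        # quoted bulk density in kg per cubic metre

volume_m3 = (4 / 3) * math.pi * radius_m ** 3
mass_tonnes = volume_m3 * density / 1000

print(f"{mass_tonnes:.2e} tonnes")  # ~1.3e10, i.e. of order 10 billion tonnes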
But, of course, an in-situ analysis of the surface chemistry would be a huge boon to the mission overall, and this is what Philae aims to provide.
It will carry a drill to pull up comet samples into an onboard laboratory.
And, indeed, any surface information gathered by Philae will provide important "ground truth" for Rosetta's remote-sensing observations.
"If we get only a few measurements and a few samples, we will have been successful," said Jean-Pierre Bibring, the co-principal investigator on Philae.
[Image: ESA/Rosetta/MPS for OSIRIS Team — Site C will only be used if further study shows up previously unseen problems with site J]
"We'd like to complete the first science sequence, which is two days on the comet. But for understanding activity on the comet, we also need the long-term science. That would be a matter of a few weeks, not necessarily a few months."
In any case, engineers do not expect Philae to survive beyond about March, when it will likely succumb to overheating.
But irrespective of the outcome on 11 November, Rosetta will continue to follow 67P for at least a year.
The probe will get a grandstand view of the comet as it warms on a swing around the Sun.
67P's ices will vaporise, throwing jets of gas and an immense cloud of dust out into space.
Holger Sierks, the principal investigator on Rosetta's Osiris camera system, will soon be acquiring pictures of the surface of 67P that resolve features down to just 20cm across.
He expects the mission to yield some profound knowledge.
"I feel this is really a historic moment in science. It's unprecedented; it's a quantum step in cometary science," he told BBC News.
"We will achieve centimetre resolution, getting us closer to understanding the origins of the Solar System and its building blocks 4.5 billion years ago." ||||| A 2.4 mile-wide (4km) region on the 'head' of comet 67P/Churyumov-Gerasimenko has been revealed as the spot for the daring landing of Rosetta's Philae probe.
The high-risk manoeuvre, if successful, will be the first time in history that a probe has been landed on a comet.
Scientists at mission control in Germany hope the spider-like probe will send back data that could answer questions on the origin of Earth's water and perhaps even life.
The relatively smooth landing region, identified for the moment simply by the letter 'J', is located on the smaller of the duck-shaped comet's two lobes. An inset showing a close up of the landing site is also shown
But they've warned that the landing should be seen as an 'exciting extra' on the Rosetta mission as the mission carries a 'high risk'.
Comet 67P/Churyumov-Gerasimenko is currently travelling through space some 273 million miles (440 million km) from Earth.
The relatively smooth landing region, identified for the moment simply by the letter 'J', is located on the smaller of the duck-shaped comet's two lobes.
A further back up site has been chosen on the larger of 67P's lobes, and is currently being marked by Esa with the letter 'C'.
Close-up of Philae’s primary landing site J, which is located on the ‘head’ of Comet 67P/Churyumov–Gerasimenko. Site J offers the minimum risk to the lander in comparison to the other candidate sites, and is also scientifically interesting, with signs of activity nearby
At Site J, the majority of slopes are less than 30º relative to the local vertical, reducing the chances of Philae toppling over during touchdown. Site J also appears to have relatively few boulders, and receives sufficient daily sunlight to recharge Philae and continue science operations when its battery runs out
LANDING PHILAE ON COMET 67P
The Rosetta probe will launch its Philae robot from a distance of about 6.2 miles (10km) to Comet 67P. The 220lb (100kg) lander will reach the surface on 11 November. It will take around seven hours to descend. During the descent, images will be taken and other observations of the comet's environment will be made. Philae will make a gentle landing on the comet at walking pace, using screws and harpoons to lower and secure itself on the surface. Once the lander touches down, it will make a 360° panoramic image of the landing site to help determine where and in what orientation it has landed. The initial science phase will then begin, with other instruments analysing the plasma and magnetic environment, and the surface and subsurface temperature. The lander will also drill and collect samples from beneath the surface, delivering them to the onboard laboratory for analysis. The interior structure of the comet will be explored by sending radio waves through the surface towards Rosetta.
The announcement follows weeks of studying the comet, in the hope of finding a location where the probe can land with as little damage as possible.
If successful, the 220lb (100kg) lander will reach the surface on 11 November, where it will perform in-depth measurements on the comet.
'As we have seen from recent close-up images, the comet is a beautiful but dramatic world – it is scientifically exciting, but its shape makes it operationally challenging,' says Stephan Ulamec, Philae Lander Manager.
'None of the candidate landing sites met all of the operational criteria at the 100 per cent level, but Site J is clearly the best solution.'
At Site J, the majority of slopes have an angle less than 30º, reducing the chances of Philae toppling over during touchdown.
Site J also appears to have relatively few boulders, and receives sufficient daily sunlight to recharge Philae and continue science operations on the surface beyond the initial battery-powered phase.
Site C was chosen as a backup because of more sunlight hours and fewer boulders.
But even the flat surface chosen contains potentially dangerous boulders and cracks.
The Rosetta probe will launch its Philae robot from a distance of about 6.2 miles (10km).
The reconstructed-colour image, taken about 10 days ago, indicates how dark the comet appears. On the average, the comet's surface reflects about four per cent of impinging visible light, making it as dark as coal
This 3D image of Philae’s primary landing site on the ‘head’ of Comet 67P/Churyumov–Gerasimenko can be viewed using stereoscopic glasses with red–green/blue filters
Site C was chosen as the backup site for Rosetta’s lander Philae during the Landing Site Selection meeting held on 13–14 September 2014. The image was taken by Rosetta at a distance of about 43 miles (70km)
If all goes to plan, Philae will then make a gentle landing on the comet at walking pace, using screws and harpoons to lower and secure itself on the surface.
According to Esa, this is a 'one-shot opportunity' and real-time radio control will be impossible due to the incredible distance of the comet from Earth.
The entire landing process is expected to take seven hours, and if successful, will represent a historic moment in space exploration.
'We will make the first ever in situ analysis of a comet at this site, giving us an unparalleled insight into the composition, structure and evolution of a comet,' says Jean-Pierre Bibring, a lead lander scientist.
'Site J in particular offers us the chance to analyse pristine material, characterise the properties of the nucleus, and study the processes that drive its activity.'
Several different surface regions are shown in this map, which is oriented with the comet's 'body' in the foreground and the 'head' in the background. The map was used to help researchers pick a suitable place to drop a lander in November
The original five candidate landing sites for Rosetta’s lander Philae, and with the backup, Site C, indicated
The race to find the landing site could only begin once Rosetta arrived at the comet on 6 August, when the comet was seen close-up for the first time.
Since then, the spacecraft has moved to within 18 miles (30km) of the comet, allowing more detailed scientific measurements of the candidate sites.
'There's no time to lose, but now that we're closer to the comet, continued science and mapping operations will help us improve the analysis of the primary and backup landing sites,' says ESA Rosetta flight director Andrea Accomazzo.
'Of course, we cannot predict the activity of the comet between now and landing, and on landing day itself.
'A sudden increase in activity could affect the position of Rosetta in its orbit at the moment of deployment and in turn the exact location where Philae will land, and that's what makes this a risky operation.'
During the descent, images will be taken and other observations of the comet's environment will be made.
The Rosetta probe will launch its Philae robot from a distance of about 6.2 miles (10km) to 67P. If all goes to plan, Philae will then make a gentle landing on the comet at walking pace
Rosetta took an incredible selfie of its 131ft (40 metre) solar wings gleaming against the darkness of space last week. In the background is the duck-shaped comet, Comet 67P/Churyumov-Gerasimenko, with its distinct 'head' and 'body' clearly visible
Once the lander touches down, it will make a 360° panoramic image of the landing site to help determine where and in what orientation it has landed.
The initial science phase will then begin, with other instruments analysing the plasma and magnetic environment, and the surface and subsurface temperature.
The lander will also drill and collect samples from beneath the surface, delivering them to the onboard laboratory for analysis.
The interior structure of the comet will be explored by sending radio waves through the surface towards Rosetta.
'No one has ever attempted to land on a comet before, so it is a real challenge,' says Fred Jansen, Esa Rosetta mission manager.
'The complicated 'double' structure of the comet has had a considerable impact on the overall risks related to landing, but they are risks worth taking to have the chance of making the first ever soft landing on a comet.'
Comets are time capsules containing primitive material left over from the epoch when the sun and its planets formed.
By studying the gas, dust and structure of the nucleus and organic materials associated with the comet, via both remote and in situ observations, the Rosetta mission should become the key to unlocking the history and evolution of our solar system. | – Europe's space agency has finally decided exactly where on comet 67P/Churyumov-Gerasimenko it's going to attempt to land a robot—and in this case, "J" marks the spot. The robot, called Philae, is currently being carried by the space probe Rosetta, which is orbiting 67P after a decade-long pursuit. Experts had to find a relatively smooth area on the "ice mountain" (which also happens to be rubber-duck shaped) that would be hospitable to Philae, the BBC reports. They chose a spot, identified using the letter "J", on the smaller of the comet's two lobes. If the landing, set for Nov. 11, is successful, Philae will lock onto the 2.5-mile-wide comet with help from harpoons and screws, the BBC notes. But it won't be easy: "None of the candidate landing sites met all of the operational criteria at the 100% level," says the landing manager, who notes the "relatively flat" site that was chosen does have some cliffs and boulders. Further, the comet is currently about 273 million miles away from Earth, and due to that distance, the lander can't be controlled in real time. Rather, the landing commands will be uploaded to Philae days in advance. Should the landing be successful, Philae will study the comet, checking temperatures, collecting samples, and sending radio waves through 67P to investigate its insides, the Daily Mail reports. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Free Flow of Information Act of
2006''.
SEC. 2. PURPOSE.
The purpose of this Act is to guarantee the free flow of
information to the public through a free and active press as the most
effective check upon Government abuse, while protecting the right of
the public to effective law enforcement and the fair administration of
justice.
SEC. 3. DEFINITIONS.
In this Act--
(1) the term ``attorney for the United States'' means the
Attorney General, any United States Attorney, Department of
Justice prosecutor, special prosecutor, or other officer or
employee of the United States in the executive branch of
Government or any independent regulatory agency with the
authority to obtain a subpoena or other compulsory process;
(2) the term ``communication service provider''--
(A) means any person that transmits information of
the customer's choosing by electronic means; and
(B) includes a telecommunications carrier, an
information service provider, an interactive computer
service provider, and an information content provider
(as such terms are defined in sections 3 and 230 of the
Communications Act of 1934 (47 U.S.C. 153 and 230));
and
(3) the term ``journalist'' means a person who, for
financial gain or livelihood, is engaged in gathering,
preparing, collecting, photographing, recording, writing,
editing, reporting, or publishing news or information as a
salaried employee of or independent contractor for a newspaper,
news journal, news agency, book publisher, press association,
wire service, radio or television station, network, magazine,
Internet news service, or other professional medium or agency
which has as 1 of its regular functions the processing and
researching of news or information intended for dissemination
to the public.
SEC. 4. COMPELLED DISCLOSURE AT THE REQUEST OF ATTORNEYS FOR THE UNITED
STATES IN CRIMINAL PROCEEDINGS.
(a) In General.--Except as provided in subsection (b), in any
criminal investigation or prosecution, a Federal court may not, upon
the request of an attorney for the United States, compel a journalist,
any person who employs or has an independent contract with a
journalist, or a communication service provider to disclose--
(1) information identifying a source who provided
information under a promise or agreement of confidentiality
made by the journalist while acting in a professional
newsgathering capacity; or
(2) any records, communication data, documents, or
information that the journalist obtained or created while
acting in a professional newsgathering capacity and upon a
promise or agreement that such records, communication data,
documents, or information would be confidential.
(b) Disclosure.--Compelled disclosures otherwise prohibited under
subsection (a) may be ordered only if a court, after providing the
journalist, or any person who employs or has an independent contract
with a journalist, notice and an opportunity to be heard, determines by
clear and convincing evidence that--
(1) the attorney for the United States has exhausted
alternative sources of the information;
(2) to the extent possible, the subpoena--
(A) avoids requiring production of a large volume
of unpublished material; and
(B) is limited to--
(i) the verification of published
information; and
(ii) surrounding circumstances relating to
the accuracy of the published information;
(3) the attorney for the United States has given reasonable
and timely notice of a demand for documents;
(4) nondisclosure of the information would be contrary to
the public interest, taking into account both the public
interest in compelling disclosure and the public interest in
newsgathering and maintaining a free flow of information to
citizens;
(5) there are reasonable grounds, based on an alternative,
independent source, to believe that a crime has occurred, and
that the information sought is critical to the investigation or
prosecution, particularly with respect to directly establishing
guilt or innocence; and
(6) the subpoena is not being used to obtain peripheral,
nonessential, or speculative information.
SEC. 5. COMPELLED DISCLOSURE AT THE REQUEST OF CRIMINAL DEFENDANTS.
(a) In General.--Except as provided in subsection (b), a Federal
court may not, upon the request of a criminal defendant, compel a
journalist, any person who employs or has an independent contract with
a journalist, or a communication service provider to disclose--
(1) information identifying a source who provided
information under a promise or agreement of confidentiality
made by the journalist while acting in a professional
newsgathering capacity; or
(2) any records, communication data, documents, or
information that the journalist obtained or created while
acting in a professional newsgathering capacity and under a
promise or agreement that such records, communication data,
documents, or information would be confidential.
(b) Disclosure.--Compelled disclosures otherwise prohibited under
subsection (a) may be ordered only if a court, after providing the
journalist, or any person who employs or has an independent contract
with a journalist, notice and an opportunity to be heard, determines by
clear and convincing evidence that--
(1) the criminal defendant has exhausted alternative
sources of the information;
(2) there are reasonable grounds, based on an alternative
source, to believe that the information sought is directly
relevant to the question of guilt or innocence or to a fact
that is critical to enhancement or mitigation of a sentence;
(3) the subpoena is not being used to obtain peripheral,
nonessential, or speculative information; and
(4) nondisclosure of the information would be contrary to
the public interest, taking into account the public interest in
compelling disclosure, the defendant's interest in a fair
trial, and the public interest in newsgathering and in
maintaining the free flow of information.
SEC. 6. CIVIL LITIGATION.
(a) In General.--Except as provided in subsection (b), in any civil
action, a Federal court may not compel a journalist, any person who
employs or has an independent contract with a journalist, or a
communication service provider to disclose--
(1) information identifying a source who provided
information under a promise or agreement of confidentiality
made by the journalist while acting in a professional
newsgathering capacity; or
(2) any records, communication data, documents, or
information that the journalist obtained or created while
acting in a professional newsgathering capacity and upon a
promise or agreement that such records, communication data,
documents, or information would be confidential.
    (b) Disclosure.--Compelled disclosures otherwise prohibited under
subsection (a) may be ordered only if a court, after providing the journalist, or
any person who employs or has an independent contract with a
journalist, notice and an opportunity to be heard, determines by clear
and convincing evidence that--
(1) the party seeking the information has exhausted
alternative sources of the information;
(2) the information sought is critical to the successful
completion of the civil action;
(3) nondisclosure of the information would be contrary to
the public interest, taking into account both the public
interest in compelling disclosure and the public interest in
newsgathering and in maintaining the free flow of information
to the widest possible degree about all matters that enter the
public sphere;
(4) the subpoena is not being used to obtain peripheral,
nonessential, or speculative information;
(5) to the extent possible, the subpoena--
(A) avoids requiring production of a large volume
of unpublished material; and
(B) is limited to--
(i) the verification of published
information; and
(ii) surrounding circumstances relating to
the accuracy of the published information; and
(6) the party seeking the information has given reasonable
and timely notice of the demand for documents.
SEC. 7. EXCEPTION FOR JOURNALIST'S EYEWITNESS OBSERVATIONS OR
PARTICIPATION IN CRIMINAL OR TORTIOUS CONDUCT.
Notwithstanding sections 1 through 6, a journalist, any person who
employs or has an independent contract with a journalist, or a
communication service provider has no privilege against disclosure of
any information, record, document, or item obtained as the result of
the eyewitness observations of criminal conduct or commitment of
criminal or tortious conduct by the journalist, including any physical
evidence or visual or audio recording of the observed conduct, if a
court determines by clear and convincing evidence that the party
seeking to compel disclosure under this section has exhausted
reasonable efforts to obtain the information from alternative sources.
This section does not apply if the alleged criminal or tortious conduct
is the act of communicating the documents or information at issue.
SEC. 8. EXCEPTION TO PREVENT DEATH OR SUBSTANTIAL BODILY INJURY.
Notwithstanding sections 1 through 6, a journalist, any person who
employs or has an independent contract with a journalist, or
communication service provider has no privilege against disclosure of
any information to the extent such information is reasonably necessary
to stop or prevent reasonably certain--
(1) death; or
(2) substantial bodily harm.
SEC. 9. EXCEPTION FOR NATIONAL SECURITY INTEREST.
(a) In General.--Notwithstanding sections 1 through 6, a
journalist, any person who employs or has an independent contract with
a journalist, or communication service provider has no privilege
against disclosure of any records, communication data, documents,
information, or items described in sections 4(a), 5(a), or 6(a) sought
by an attorney for the United States by subpoena, court order, or other
compulsory process, if a court has provided the journalist, or any
person who employs or has an independent contract with a journalist,
notice and an opportunity to be heard, and determined by clear and
convincing evidence, that--
(1) disclosure of information identifying the source is
necessary to prevent an act of terrorism or to prevent
significant and actual harm to the national security, and the
value of the information that would be disclosed clearly
outweighs the harm to the public interest and the free flow of
information that would be caused by compelling the disclosure;
or
(2) in a criminal investigation or prosecution of an
unauthorized disclosure of properly classified Government
information by an employee of the United States, such
unauthorized disclosure has seriously damaged the national
security, alternative sources of the information identifying
the source have been exhausted, and the harm caused by the
unauthorized disclosure of properly classified Government
information clearly outweighs the value to the public of the
disclosed information.
(b) Rule of Construction.--Nothing in this Act shall be construed
to limit any authority of the Government under the Foreign Intelligence
Surveillance Act (50 U.S.C. 1801 et seq.).
SEC. 10. JOURNALIST'S SOURCES AND WORK PRODUCT PRODUCED WITHOUT PROMISE
OR AGREEMENT OF CONFIDENTIALITY.
Nothing in this Act shall supersede, dilute, or preclude any law or
court decision compelling or not compelling disclosure by a journalist,
any person who employs or has an independent contract with a
journalist, or a communication service provider of--
(1) information identifying a source who provided
information without a promise or agreement of confidentiality
made by the journalist while acting in a professional
newsgathering capacity; or
(2) records, communication data, documents, or information
obtained without a promise or agreement that such records,
communication data, documents, or information would be
confidential. | Free Flow of Information Act of 2006 - Prohibits federal courts in criminal or civil proceedings from compelling journalists to disclose their confidential sources or information which they obtain in a professional newsgathering capacity. Allows exceptions if a court finds that: (1) alternative means of obtaining such confidential information have been exhausted and reasonable and timely notice of a demand for such information has been given; (2) subpoenas for such information are limited in scope; (3) such information is critical to pending criminal or civil litigation; and (4) nondisclosure of such information would be contrary to the public interest.
Denies journalists a privilege against disclosure of confidential information if such information: (1) was obtained by eyewitness observations of criminal conduct by a journalist or involvement of such journalist in criminal or tortious conduct; (2) is necessary to prevent death or substantial bodily harm; (3) is necessary to protect national security; or (4) was provided or obtained without a promise of confidentiality.
null | it has been hypothesized that environmental exposure to synthetic estrogenic chemicals and related endocrine - active compounds may be responsible for a global decrease in sperm counts , decreased male reproductive capacity , and breast cancer in women .
results of recent studies show that there are large demographic variations in sperm counts within countries or regions , and analyses of north american data show that sperm counts have not decreased over the last 60 years .
analyses of records for hypospadias and cryptorchidism also show demographic differences in these disorders before 1985 ; however , since 1985 rates of hypospadias have not changed and cryptorchidism has actually declined .
temporal changes in sex ratios and fertility are minimal , whereas testicular cancer is increasing in most countries ; however , in scandinavia , the difference between high ( denmark ) and low ( finland ) incidence areas is not well understood and is unlikely to be correlated with differences in exposure to synthetic industrial chemicals .
results from studies on organochlorine contaminants ( dde / pcb ) show that levels were not significantly different in breast cancer patients versus controls .
thus , many of the male and female reproductive tract problems linked to the endocrine - disruptor hypothesis have not increased and are not correlated with synthetic industrial contaminants .
this does not exclude an endocrine etiology for some adverse environmental effects or human problems associated with high exposures to some chemicals . |
the inflation paradigm @xcite is successful in explaining the horizon problem , flatness problem and the homogeneity problem in the standard hot - big - bang cosmology .
the generic inflation model predicts a nearly scale - invariant primordial scalar power spectrum , which has been measured accurately by observations of the cosmic microwave background radiation ( cmb ) from satellites such as the _ wilkinson microwave anisotropy probe _ ( hereafter _ wmap _ ) @xcite and _ planck _ @xcite .
however , even with precise constraints from cmb temperature fluctuations , there are still many models predicting values of the spectral index @xmath1 and its running @xmath11 that are allowed by the constraints from current data .
recently , the ground - based `` background imaging of cosmic extragalactic polarization '' experiment completed its second phase ( hereafter bicep2 ) , which observed the cmb b - mode polarization ( the divergence - free mode of polarization ) on angular scales of a few degrees @xcite ( for cosmological implications , see also @xcite ) .
the cmb b - mode polarization can only be sourced by primordial gravitational waves , which is a very clean test of the primordial tensor fluctuations .
results from bicep2 @xcite show that the power spectrum of b - mode polarization @xmath12 on a few degree angular scales is detected at @xmath13 confidence level ( cl ) , which clearly indicates a signature of primordial gravitational waves .
if this is true , it becomes a strong observational support for the scenario in which the universe started from an inflationary exponential expansion , during which the primordial tensor fluctuations were produced and stretched to super - hubble lengths , later re - entering the hubble horizon and decaying at small scales .
indeed , this field of cmb observation has been developing very fast over the past decades and many on - going experiments are seeking such a cmb b - mode polarization signal .
for instance , the _ planck _ satellite with its nine frequency channels may achieve higher signal - to - noise ratio and probe even larger angular scales than bicep2 .
ground - based sptpol @xcite , actpol @xcite , polarbear @xcite and class @xcite experiments are also competing with each other to make more precise measurements of the cmb b - mode polarization signal .
further experiments may therefore precisely determine not only the amplitude but also the shape of the primordial tensor power spectrum , which would constitute a direct test of the inflation mechanism .
therefore it is important to connect the predictions from inflation models with the current observational results from bicep2 and _ planck _ . in previous _ wmap _ and _ planck _ analysis papers @xcite , the authors plot the predictions of the spectral index of the scalar power spectrum @xmath1 and the tensor - to - scalar ratio @xmath0 of various inflation models against the constraints from cmb data ( fig . 7 in @xcite and fig . 1 in @xcite ) .
while making the prediction of the @xmath14-@xmath0 relation for a given potential , the variation of the inflaton field is calculated by integrating the equation of motion from the end of inflation back to some early epoch . this duration of inflation is assumed to be around @xmath15-@xmath16 e - folds ( @xmath17 ) .
although the @xmath1-@xmath0 relation works well , it is worth noticing the underlying assumption that during inflation , the inflaton potential ( which is typically taken as a monomial , for example , @xmath18 ) is the same as that during the first 10 e - folds of observable inflation . with the recent measurement of the tensor - to - scalar ratio @xmath0 ,
this assumption becomes problematic .
it becomes much more challenging than before to build an inflation model in which a simple potential describes the total @xmath16 e - folds of inflation without changing its shape and parameters . to see this , recall that the inflationary potential can be perturbatively expanded near a value @xmath19 as @xmath20 , where @xmath21 is the change of the @xmath9 value during inflation . from the effective field theory point of view , the potential derivatives up to @xmath22 correspond to relevant and marginal operators .
those operators can be naturally turned on without suppression . on the other hand ,
the @xmath22 and higher derivatives are irrelevant operators , which are suppressed by an energy scale defined by the uv physics ( at most the planck scale ) . for the expansion to converge we need @xmath23 to be smaller than the uv completion scale of inflation .
however , the lyth bound @xcite suggests that the change of the field with respect to the number of e - folds is related to the value of @xmath0 : @xmath24 , where @xmath25 is the reduced planck mass . substituting in the current measurement of @xmath0 from bicep2 @xcite , @xmath26 , we find per e - fold @xmath27 . by assuming @xmath28 , we find that the inflaton field moves at least a distance @xmath30 in its field space . [ footnote : ... taken into account , to avoid model dependence ; otherwise the number in the estimate could change , while keeping within the same order of magnitude . note that it is also possible that @xmath29 is not varying monotonically , to avoid large field inflation @xcite . ]
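as a worked illustration , assuming the standard form of the lyth bound and the bicep2 central value r = 0.2 ( symbols written out here , since the original equations are not reproduced in this dump ) :

\frac{\mathrm{d}\phi}{\mathrm{d}N} \simeq M_p \sqrt{\frac{r}{8}} \approx M_p \sqrt{\frac{0.2}{8}} \approx 0.16\, M_p \,, \qquad \Delta\phi \gtrsim 60 \times 0.16\, M_p \approx 9.5\, M_p \quad \text{over the full 60 e-folds} \,.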
if this is true , @xmath23 at 60 e - folds is much greater than @xmath31 .
thus the expansion is no longer valid , since all the higher derivatives of @xmath32 could in principle contribute along the 60 e - folds of the inflationary trajectory .
the effective field theory of the inflationary background is therefore non - perturbative , and becomes out of control at higher order derivatives .
the uv completion of inflation becomes a sharper problem than ever before .
however , the leading uv completion paradigm , string theory , actually makes the problem worse . on the one hand , most string inflation models predict much smaller @xmath0 and are thus not consistent with the bicep2 data . on the other hand , the characteristic energy scale of string theory is the string scale . for string theory to be perturbatively solvable , the string coupling should be small and the string scale should be lower than the planck scale ( say , 0.1 @xmath33 or lower ) . the size of the extra dimensions may further lower the string scale . with such a lower scale as the cutoff , the effective field theory becomes a greater challenge than with the planck scale cutoff . before bicep2 ,
the major challenge for building stringy inflation models is the @xmath34-problem @xcite , with the observational @xmath34 smaller than theoretical expectations .
now , a more serious @xmath29-problem emerges , leaving the observed large @xmath29 for string theorists to explain . from the effective field theory point of view , given the current constraint on @xmath0 , we may not be able to trust the inflaton potential along the whole 60 e - folds .
this motivates us not to integrate the potential throughout @xmath16 e - folds , but to reconstruct the potential @xcite locally . therefore we focus on a local range of field values , within the window of the first few e - folds . in this range , @xmath35 , and thus the inflationary potential expanded as in eq . ( [ eq : eft ] ) is under better control .
we will show that , assuming small running , with current data it is possible to accurately reconstruct the amplitude and shape of the inflaton potential within the cmb observation window of about 10 e - folds .
however , in the case of large running , the uncertainty of the reconstruction becomes large when @xmath23 is comparable with @xmath36 , which corresponds to a field range of about 3 e - folds .
this paper is organized as follows . in sec . [ sec : slow - roll ] , we explain our notation for the slow - roll parameters and show the connection with @xmath1 and @xmath0 . in sec . [ sec : recon - slow ] we directly constrain the slow - roll parameters with current data from bicep2 . in sec . [ sec : recon - poten ] , we sample the inflationary potential and compare its amplitude and shape with the large - field inflation models . the conclusion and discussions are presented in the last section .
the slow - roll parameters as derivatives of the scale factor can be defined as @xmath37 . note that these are not equivalent to the parameters defined by the derivatives of the inflationary potential .
however , the definitions ( [ eq : def ] ) are increasingly commonly used , and their role in keeping track of the expansion history of inflation is by itself important . at the leading order of slow roll , the scalar power spectrum can be written as @xcite @xmath38 , where @xmath39 gev is the reduced planck mass .
the spectral index of the primordial power spectrum , the running of the spectral index and the tensor - to - scalar ratio for single - field slow - roll inflation are @xmath40 , @xmath41 and @xmath42 . given the current measurement of @xmath0 ( eq . ( [ eq : r - val ] ) ) with @xmath43 at @xmath44 ( best - fitting value constrained by _ planck _ @xcite ) , the best - fitting values of @xmath29 and the hubble parameter @xmath45 are @xmath46 respectively . this hubble scale sets the energy scale of inflationary perturbations .
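for reference , in one common convention for slow - roll parameters defined from the expansion history ( an assumption on our part , since the original equations are not reproduced in this dump ) , the leading - order relations read

\mathcal{P}_s = \frac{H^2}{8\pi^2 \epsilon M_p^2} \,, \qquad n_s - 1 = -2\epsilon - \eta \,, \qquad r = 16\,\epsilon \,,

so that with n_s = 0.96 , r = 0.2 and \mathcal{P}_s = 2.2 \times 10^{-9} one gets \epsilon = r/16 \approx 0.0125 and H \approx 4.7 \times 10^{-5}\, M_p \approx 1.1 \times 10^{14} gev .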
the corresponding energy density is @xmath47 , which is about @xmath48 times higher than the energy scale of the current large hadron collider ( lhc ) experiment .
therefore , the cmb experiment is essentially a high - energy experiment that probes a regime of physics inaccessible to current ground - based accelerators .
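as a rough check of the quoted ratio , assuming the standard relation between the potential and the hubble rate :

V \approx 3 M_p^2 H^2 \;\Rightarrow\; V^{1/4} \approx 2 \times 10^{16}\ \text{gev} \,, \qquad \frac{V^{1/4}}{E_{\mathrm{lhc}}} \sim \frac{2 \times 10^{16}\ \text{gev}}{10^{4}\ \text{gev}} \sim 10^{12} \,.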
note that @xmath49 is of the order of the maximal temperature that the universe could reach at reheating .
the current preferred value @xmath50 is interestingly near the grand unification scale .
it thus becomes increasingly important to understand the relation between inflation and grand unification , including model building , the reheating mechanism , and possible topological defects which might be produced at the grand unification phase transition . the tensor power spectrum and its spectral index are @xmath51 , where the second equation holds only for single - field slow - roll inflation models .
figure [ fig : nsr ] shows the joint constraints ( @xmath52 confidence level ) on @xmath1-@xmath0 from current _ planck _ + wp + highl + bicep2 data . [ the highl cmb data are mainly from the 150 ghz south pole telescope ( spt ) @xcite and the 148 ghz atacama cosmology telescope ( act ) @xcite . ]
we have also plotted the predictions of the @xmath1-@xmath0 relation for e - folds @xmath53-@xmath16 , for the large - field inflation models @xmath54 ( n=1,2,3,4 ) ( see sec . [ sec : compare ] for details ) .
we can see that the @xmath55 and @xmath56 models are within or near the @xmath57 cl ( depending on the number of e - folds ) , and the previously `` ruled out '' @xmath58 potential is now back inside the @xmath59 contour if @xmath60 .
the linear potential becomes disfavored by the new data .
the @xmath1-@xmath0 diagram can be fitted by the multivariate normal distribution @xmath61 , where @xmath62 , @xmath63 ( @xmath64 , @xmath65 ) are the central value and standard deviation of @xmath66 ( @xmath0 ) respectively , and @xmath67 is their correlation coefficient .
fitting this multivariate gaussian distribution to the ( @xmath1 , @xmath0 ) diagram ( fig . [ fig : nsr ] ) , we find @xmath68 . the equal - probability contours of the multivariate distribution are those with the exponent @xmath69 = constant . the quantity @xmath70 as a random variable obeys the @xmath71 distribution ( i.e. , the @xmath72 distribution with two degrees of freedom ) , so a contour with probability @xmath73 inside corresponds to @xmath74 = 2\log ( 1/(1-\alpha) ) .
from these relations , the inflationary slow - roll parameters @xmath29 and @xmath34 satisfy the multivariate normal distribution @xmath75 , with the central values , standard deviations and correlation coefficient of @xmath29 and @xmath34 being @xmath76 , @xmath77 and @xmath78 . plugging in the data , we obtain @xmath79 . the @xmath57 and @xmath59 cl of the joint constraints on @xmath29 and @xmath34 are plotted in the right panel of fig . [ fig : nsr ] .
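a minimal sketch of the contour - level computation described above ( the fitted central values , widths and correlation are hidden behind @xmath tokens in this dump , so the numbers below are placeholders , not the paper 's fit ) :

import numpy as np

# placeholder (n_s, r) central values, widths and correlation; assumptions only
mu = np.array([0.96, 0.20])
sig = np.array([0.007, 0.05])
rho = -0.1

cov = np.array([[sig[0]**2,             rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1]**2            ]])

# for a bivariate gaussian the quadratic form (x - mu)^T cov^{-1} (x - mu)
# follows a chi^2 distribution with two degrees of freedom, so a contour
# enclosing probability alpha sits at delta = 2 * log(1 / (1 - alpha))
for alpha in (0.683, 0.954):
    delta = 2.0 * np.log(1.0 / (1.0 - alpha))
    print(f"alpha = {alpha:.3f}: delta = {delta:.2f}")  # 2.30 and 6.17

# evaluate the quadratic form at a test point and compare with the levels above
point = np.array([0.965, 0.25])
d = point - mu
print(float(d @ np.linalg.inv(cov) @ d))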
one can see that the constraints on large - field inflation are the same as in the left panel of fig .
[ fig : nsr ] : the @xmath80 and @xmath8 models are favoured by the current data within @xmath81 cl , while the @xmath82 model is ruled out at @xmath83-@xmath84 cl .
the probability distribution of @xmath85 , on the other hand , can be derived from the current bound of @xmath2 .
current data show @xcite @xmath86 . to good precision , one can approximate @xmath87 , considering that @xmath88 is much smaller than the current experimental bound . on the other hand , for most slow - roll models of inflation , @xmath85 is much smaller than the current bound .
here we shall assume @xmath89 and @xmath90 are of the order @xmath34 or smaller , while considering both cases of large @xmath85 and small @xmath85 , motivated by observations and theory respectively .
in this section we expand the inflationary potential in terms of the slow - roll parameters .
since we are only interested in a range of a few e - folds , we are able to locally expand the potential in an effective field theory , and we can be more confident in dropping the non - renormalizable terms . in addition , higher order derivatives of the potential are in practice highly suppressed by slow - roll parameters ( and by the largeness of @xmath31 ) , unless the higher order slow - roll parameters are unusually huge . thus we derive the derivatives of the potential up to @xmath91th order . in single field inflation ( without slow - roll approximation ) , the derivatives of the potential can be solved from the slow - roll parameters as @xmath92 . the slow - roll approximation simplifies the above equations .
however , it is important to note that if we allow large running , @xmath85 could be as large as @xmath93 .
thus here we apply the slow - roll approximation , but leave @xmath85 unapproximated . [ footnote : the calculation of ... , @xmath66 and @xmath2 around the local potential does not rely on the smallness of @xmath85 ; see , for example , @xcite for the computational details .
this is an advantage of making use of the slow roll parameters defined from the expansion . on the other hand ,
if the slow roll parameters are defined by derivatives of the potential , the large @xmath85 enters the calculation , though it is eventually cancelled in calculating the observables . ] by using @xmath94 , eqs .
( [ eq : v1][eq : v4 ] ) can be simplified as @xmath95 , where the dimensionless parameters @xmath96 ( @xmath97 ) are defined to measure the derivatives of the inflationary potential . without loss of generality
we have used @xmath98 , i.e. we have chosen the positive - sign solution instead of the negative - sign solution @xmath99 .
this is because , given a potential with a @xmath100 solution , one can always flip the potential by a @xmath101 redefinition without changing any physics . with the definition in ,
@xmath102 is of order @xmath103 in slow - roll parameters .
however , one should note that there may be a hierarchy between @xmath29 and @xmath34 such that the above counting ( @xmath103 ) may break down if @xmath104 .
fortunately , with the current tensor measurement we should have at least @xmath105 .
thus the slow - roll order counting should be fine unless fine tuning happens . with the above definition
, the potential can be reconstructed up to @xmath91th order as @xmath106 . here one can see explicitly that if @xmath107 , the coefficients @xmath102 are required to be smaller at higher orders in order for the taylor expansion to converge . before performing the numerical reconstruction of the potential ,
we derive a few analytical relations . given the distribution of @xmath29 and @xmath34 , the statistical properties of @xmath108 and @xmath109 can be calculated as @xmath110 where @xmath111 is the hypergeometric function and @xmath112 is the modified bessel function of the first kind . for the calculation of @xmath113 and @xmath114 ,
we have assumed that @xmath85 is small and negligible .
however , in the following numerical sampling , we shall consider both possibilities : either sampling @xmath85 from the observational bound of @xmath2 , or assuming @xmath85 is small and negligible .
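a minimal sketch of the sampling step ( the local expansion is written here as V(phi0 + dphi) = V0 * (1 + sum_n d_n dphi^n / n!) , and the means and widths below are placeholder assumptions , since the paper 's fitted coefficient distributions are hidden behind @xmath tokens ) :

import math
import numpy as np

rng = np.random.default_rng(0)

# placeholder means/widths for the dimensionless derivative coefficients d1..d4
means  = np.array([0.16, 0.01, 0.0, 0.0])
widths = np.array([0.02, 0.01, 0.01, 0.01])

dphi = np.linspace(-0.5, 0.5, 101)                  # field range, reduced planck units
samples = rng.normal(means, widths, size=(5000, 4)) # monte-carlo draws of (d1..d4)

# basis functions dphi^n / n! for n = 1..4
basis = np.stack([dphi**n / math.factorial(n) for n in (1, 2, 3, 4)])

v_over_v0 = 1.0 + samples @ basis                   # shape (5000, 101)
lo, hi = np.percentile(v_over_v0, [16, 84], axis=0) # 68% reconstruction band
print(lo[[0, 50, 100]], hi[[0, 50, 100]])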
first ( and also theoretically reasonable ) , we can assume that @xmath115 ( @xmath116 ) are random variables with the same variance as @xmath34 ( a difference at the same order of magnitude does not change the result significantly ) .
the probability distributions of @xmath117 and @xmath118 are plotted in the left panel of fig . [ fig : d3d4 ] . from the plot , we confirm that with the mild theoretical assumptions discussed before , the derivative expansion of the potential converges nicely as a local expansion .
we have also checked our assumption of the @xmath89 and @xmath90 distribution in the middle panel of fig .
[ fig : d3d4 ] , where @xmath89 and @xmath90 are set to zero , which does not significantly modify the distribution of @xmath117 and @xmath118 .
second , we also assume non - zero @xmath2 and take its constraints ( eq . ( [ eq : alphas ] ) ) ( values near the observational bound ) to sample the potential . this possibility seems theoretically improbable ,
but observationally a large @xmath2 would be the easiest way to resolve the tension between the low @xmath0 reported by _ wmap_/_planck _ and the high @xmath0 reported by bicep2 .
the tension may alternatively be resolved by considering isocurvature perturbations or the anomalous suppression of power at low @xmath119 ;
those possibilities are beyond the scope of the current work , but interested readers can refer to @xcite . the distribution of @xmath108 and @xmath109 is illustrated in fig .
[ fig : d1d2 ] , with small @xmath2 and observational @xmath2 respectively . as one can see ( and analytically expect ) ,
the dependence on @xmath2 is weak for @xmath108 and @xmath109 . in the right panel of fig . [ fig : d3d4 ] , we use @xmath2 to constrain @xmath85 ; there @xmath89 and @xmath90 are treated as having the same variance as @xmath34 . but
the choice of @xmath89 and @xmath90 only affects @xmath118 , which is the least important one in the reconstruction . finally , with the realizations of @xmath108 , @xmath109 , @xmath117 and @xmath118 , we can reconstruct @xmath120 locally from .
the reconstruction is plotted in fig .
[ fig : v ] , for the @xmath121 case ( left panel ) and the large @xmath2 case ( right panel ) respectively . for the case of @xmath122 ,
the reconstructed potential is highly linear within the range @xmath3 .
but when we extrapolate the potential into the range of about @xmath123 , which is suggested by large - field inflation , we see that the higher order derivatives do tend to bend the reconstructed potential . in the large @xmath2 case , the reconstructed potential has significant non - linearities at @xmath124 , especially for exceptional values of @xmath102 in the parameter space .
on the other hand , the reconstruction remains under control for a local range of @xmath125 .
[ figure : @xmath1-@xmath0 diagram with predictions of @xmath126 models and the joint constraints . the _ planck _ + wp + highl contours without running are extracted from @xcite ; the bicep2 @xmath127 result is taken from @xcite . for comparison , the planck + wp + highl and planck + wp + highl + bicep2 results with running can be found in @xcite . ]
although the local reconstruction of the inflaton potential is safer than fitting a global potential , describing the global inflationary potential by one polynomial function is very widely used . here we compare our local reconstruction with the global inflaton potential @xmath128 .
the detailed analysis of the reheating history is beyond the scope of the current paper .
here we use the analytical approximation that inflation ends when @xmath129 , and use the slow - roll approximation before reaching @xmath129 . in this approach , @xmath129 corresponds to @xmath130 .
the relevant quantities at horizon crossing can be calculated as @xmath131 ; those values correspond to @xmath132 . the corresponding parameters are plotted on the @xmath133-@xmath0 diagram in fig .
[ fig : pbcombine ] . currently , the best models fitting the bicep2 data are the @xmath55 and @xmath56 models . from the power spectrum , the @xmath134 values for those models
are ( note that the power spectrum is calculated at @xmath135 ) @xmath136 and @xmath137 . here the @xmath134 - value for the @xmath55 potential corresponds to @xmath138 ( @xmath53 ) and @xmath139 ( @xmath140 ) .
the @xmath141 and @xmath142 models can be calculated similarly .
the potentials of those four models are plotted together with the reconstructed potential in fig .
[ fig : v ] .
as one can observe from the figures , the @xmath141 model falls outside the @xmath81 range of the reconstructed potential ( in almost the whole plotted range with small @xmath2 , and before the higher derivatives become dominant with large @xmath2 ) , while the @xmath143 model stays at the boundary of @xmath81 at small @xmath23 .
this is consistent with the @xmath1-@xmath0 and @xmath29-@xmath34 contours in fig . [ fig : nsr ] . on the other hand ,
an advantage of our approach is that we now directly have the form and variance of the potential .
we have reconstructed the inflationary potential locally around a value @xmath19 , which corresponds to the time when the @xmath144 modes exit the horizon .
the distribution of the inflationary slow - roll parameters ( which are defined through the expansion ) is calculated , and converted to derivatives of the inflationary potential .
two different assumptions have been tested against the reconstruction : a ( theoretically motivated ) small and negligible running of the spectral index @xmath2 , and an observationally allowed @xmath2 from current constraints . for the case of small and negligible @xmath2 , the reconstructed potential is highly linear over the @xmath145 range .
the effective field theory is practically fine ( although still theoretically challenged ) .
however , for the large @xmath2 case , higher derivative corrections to the potential quickly dominate while @xmath9 rolls , which implies that the inflaton keeps switching between different effective field theories , or that a tuned inflaton field theory is needed . with the new observational window opened by the bicep2 data @xcite , much work remains to be done to accurately reconstruct the amplitude and shape of the inflaton potential .
here we fit the ( @xmath1 , @xmath0 ) diagram with a multivariate gaussian distribution .
we find that with current constraints from _ planck_+wp+highl+bicep2 data , the @xmath7 and @xmath8 models are consistent within @xmath59 cl , while the @xmath9 potential is ruled out at around @xmath146 cl , and the @xmath147 model is consistent within @xmath59 cl if the number of e - folds is around @xmath16 .
this is , of course , not a global fit of the inflationary predictions , but it constitutes a quick examination of the consistency between models and data .
it is also important to examine the theoretical assumptions about the shape of the gravitational wave spectrum .
for example , parity may be violated , which would result in different amplitudes for the two tensor modes ; another example would be non - gaussianly distributed tensor modes .
it remains interesting to see whether the different theoretical models can fit the new data of cmb polarization .
theoretically , the super - planckian range of @xmath9 motion poses a serious challenge to the field theory of inflation .
it is very important to see how to obtain theoretical naturalness for large - field inflation .
alternatively , it remains an open question whether other sources of gravitational waves , instead of the tensor fluctuation from the vacuum , could change the predictions .
yzm is supported by a cita national fellowship .
yw is supported by a starting grant of the european research council ( erc stg grant 279617 ) , and the stephen hawking advanced fellowship .
p. a. r. ade _ et al . _ [ bicep2 collaboration ] , arxiv:1403.3985 [ astro-ph.co ] .
w. zhao , c. cheng and q. -g . huang , arxiv:1403.3919 [ astro-ph.co ] ; t. higaki , k. s. jeong and f. takahashi , arxiv:1403.4186 [ hep-ph ] ; k. nakayama and f. takahashi , arxiv:1403.4132 [ hep-ph ] ; d. j. e. marsh , d. grin , r. hlozek and p. g. ferreira , arxiv:1403.4216 [ astro-ph.co ] .
e. j. copeland , a. r. liddle , d. h. lyth , e. d. stewart and d. wands , 1994 , phys . rev . d * 49 * , 6410 [ astro-ph/9401011 ] .
e. j. copeland , e. w. kolb , a. r. liddle and j. e. lidsey , 1993 , phys . rev . d * 48 * , 2529 [ hep-ph/9303288 ] .
j. e. lidsey , a. r. liddle , e. w. kolb , e. j. copeland , t. barreiro and m. abney , 1997 , rev . mod . phys . * 69 * , 373 [ astro-ph/9508078 ] .
i. ben - dayan & r. brustein , 2010 , jcap , 09 , 007 | we locally reconstruct the inflationary potential by using the current constraints on @xmath0 and @xmath1 from bicep2 data . assuming small and negligible @xmath2 , the inflationary potential is approximately linear in the @xmath3 range but becomes non - linear in the @xmath4 range .
however , if we vary the value of @xmath2 within the range given by constraints from the _ planck _ measurement , the local reconstruction is only valid in the range of @xmath5 , which challenges the inflationary background from the point of view of effective field theory .
we show that , within the range of @xmath6 , the inflaton potential can be precisely reconstructed . with the current reconstruction ,
we show that @xmath7 and @xmath8 are consistent , while the @xmath9 model is ruled out at @xmath10 confidence level by the reconstructed range of the potential .
this sets a strong limit on large - field inflation models .
|
in recent years , a significant amount of research has been conducted to investigate
the efficacy of 2% lidocaine versus 4% articaine [ 1 - 5 ] .
one common topic of
investigation is to compare the effectiveness of these two anesthetics in
challenging situations , such as the ability to anesthetize maxillary teeth with
irreversible pulpitis .
several studies comparing the efficacy of 2% lidocaine and
4% articaine have had contradictory outcomes [ 1 - 7 ] .
some studies and literature
reviews have shown a statistically significant advantage to the use of articaine ,
especially on teeth with irreversible pulpitis .
one such study found that 4%
articaine was 1.59 to 3.76 times more likely to produce anesthetic success than
2% lidocaine , and 3.81 times more likely when given as an infiltration .
similarly , a separate study found 4%
articaine with 1:100,000 epinephrine to be superior to 2% lidocaine with 1:100,000
epinephrine in patients with irreversible pulpitis when given as a maxillary buccal
infiltration .
still other studies have
found there to be no significant difference in the efficacy of 2% lidocaine
( 1:100,000 ) and 4% articaine ( 1:100,000 ) in achieving anaesthesia of maxillary teeth
with irreversible pulpitis .
one major consideration in reviewing many of these studies is that the comparison of
the efficacy of 2% lidocaine versus 4% articaine is made using equal volumes of
anesthetic instead of equal doses . considering that a given volume of 4% articaine
contains twice as much active drug as an equivalent volume of 2% lidocaine
, no
direct milligram - to - milligram comparison is being performed . as has been discussed
by other authors
, it would be expected that a 4% solution would perform better than
a 2% solution given equal volumes of fluid .
consequently , the purpose of this case - series study was to investigate the efficacy
of 2% lidocaine ( 1:100,000 ) and 4% articaine ( 1:100,000 ) in the surgical anaesthesia
of teeth with irreversible pulpitis using a milligram - to - milligram comparison of
both solutions .
the null hypothesis of the present study is that there is no
difference in obtaining surgical anaesthesia of maxillary posterior teeth with
irreversible pulpitis with the use of 2% lidocaine ( 1:100,000 ) or 4% articaine
( 1:100,000 ) when equal - milligram doses are administered .
the primary outcome measure
was a pain - free extraction procedure of a maxillary posterior tooth diagnosed with
irreversible pulpitis .
patient recruitment and data collection for this study took place over the course of
8 months , commencing in august of 2011 .
patients were evaluated and treated either
in the private dental practice setting in fort collins , colorado , usa , or in the
hospital setting in ann arbor , michigan usa . forty - one adult patients who presented to one of these two clinical settings on an
emergency basis and who were diagnosed with irreversible pulpitis were initially
included in this study . only those patients with a single symptomatic maxillary
posterior tooth in the quadrant to receive treatment were included .
two clinicians confirmed the diagnosis , administered the anesthetic , and performed
the extractions .
for each individual patient , diagnoses were made and data collected
by the same author administering treatment .
diagnosis of irreversible pulpitis was
made utilizing the results of cold testing , electronic pulp testing , palpation and
percussion sensitivity tests , and radiographic analysis .
all conventional treatment options constituting regional standard of care were
verbally presented and discussed with patients for management of their irreversible
pulpitis .
various types of treatment were discussed , including root canal therapy ,
extraction , implant with fixed prosthesis , fixed partial denture or removable
prosthesis .
only patients who elected extraction as their treatment of choice were
included in this analysis . over a period of 8 months ,
forty - one patients presented for extraction using the
criteria defined above and whose data are included in this study .
the two compounds
administered during this study were 2% lidocaine hcl with 1:100,000 epinephrine
( lidocaine ; cook - waite laboratories , inc , new york , usa ) and 4% articaine hcl with
1:100,000 epinephrine ( zorcaine ; cook - waite laboratories , new york , usa ) .
the goal was to administer equal doses ( mg ) of
either solution in a fashion that approximates standard clinical administration for
a dental extraction .
each patient received approximately three - quarters of the total anesthetic volume as a buccal
infiltration and approximately one - quarter of the total anesthetic volume as a palatal
infiltration .
lidocaine infiltration group : each patient received a total of 3.6 ml ( 72 mg ) 2%
lidocaine with 1:100,000 epinephrine solution .
approximately three - quarters of the solution
volume ( 2.7 ml ) was administered as a buccal infiltration and approximately one - quarter ( 0.9
ml ) as a palatal infiltration .
articaine infiltration group : each patient received a total of 1.8 ml ( 72 mg ) 4%
articaine with 1:100,000 epinephrine solution .
approximately three - quarters of the solution
volume ( 1.35 ml ) was administered as a buccal infiltration and approximately one - quarter ( 0.45
ml ) as a palatal infiltration .
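the equal - dose arithmetic can be checked directly ( a small sketch ; the percent - to - mg / ml conversion is the standard one , an x% solution carries x * 10 mg of drug per ml ) :

def dose_mg(volume_ml, percent):
    """anesthetic dose in mg: an X% solution carries X*10 mg of drug per ml."""
    return volume_ml * percent * 10

print(dose_mg(3.6, 2))  # 2% lidocaine, 3.6 ml -> 72.0 mg
print(dose_mg(1.8, 4))  # 4% articaine, 1.8 ml -> 72.0 mg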
local anesthetic solution was administered via infiltration at the level of the
mucobuccal fold adjacent to the symptomatic tooth , in addition to a palatal
infiltration approximately 12 - 15 mm apical to the free gingival margin .
all
solutions were injected using a 27-gauge 20 mm sterile needle and standard dental
aspirating syringe that accepts 1.8 ml carpules .
each solution , in each site , was
deposited over the course of one minute after negative aspiration .
any
patients with a medical history that contraindicated the use of amide- or
ester - containing local anesthetics with epinephrine ( e.g. , uncontrolled
hypertension ) were excluded from this study .
patients with an unstable medical
history ( e.g. , recent history of myocardial infarct or poorly controlled diabetes ) ,
or other contraindications to oral surgery were also excluded .
patients currently
taking any prescription or over - the - counter analgesics were excluded .
patients with
allergies or reported adverse events specific to lidocaine , articaine , or their
components or who exhibited factors that would compromise data collection ( e.g. ,
neuralgia , undergoing pain management ) were also excluded from analysis . prior to
administration of local anesthetic
, patients were asked to rate the level of
discomfort in the affected tooth on a scale of 0 to 10 , with 0 being no pain and 10
being the worst pain ever experienced . after a period of 10 minutes following the administration of anesthetic solution ,
initial anaesthesia was assessed by penetrating the buccal and palatal gingival
tissues with a sterile 27-gauge needle and asking the patient if any discomfort was
experienced . if 0 pain was reported on a scale of 0 to 10 , the gingival cuff around
the symptomatic tooth was relieved , and again the patient was asked to report any
discomfort .
failures of either test resulted in recording the patient as failure of
initial anaesthesia and failure of treatment due to pain .
the patient was then
managed with further supplementary anesthetic injections as needed . upon verification of successful initial anaesthesia ,
the clinician proceeded to
perform extraction of the symptomatic tooth using the usual non - surgical extraction
protocol . before any elevation or forceps application
was attempted , a # 7
mucoperiosteal elevator was introduced into the mesial and distal pdl space to
verify anaesthesia and aid in atraumatic extraction .
if any pain was reported at any
point between initial verification of anaesthesia and complete delivery of the
symptomatic tooth , the patient was recorded as failure of pain - free treatment and
managed with further supplementary anesthetic injections as needed .
data were analysed with the fisher 's exact test , unpaired student 's t - test and
normality test .
the patient
age ranged between 19 and 63 years , with a median of 38 years and a mean of 38.5
years , standard deviation ( sd ) 12.25 years .
there was no significant difference between females and males in this study with respect to age ( mean ± sd = 37.8 ± 12.9 for females and 39.7 ± 11.8 for males , t = 0.46 , p = 0.65 ) .
there was no significant difference
between the proportion of males to females between the lidocaine and articaine
groups ( fisher 's exact test , p = 0.341 ) .
there was no significant difference between
the lidocaine and articaine groups with respect to the reported pain ( 0 - 10 ) before
initiating treatment ( averages were 4.8 and 4.2 , respectively ) .
[ table 1 caption : patient demographics , initial pain and categories of teeth anesthetized , tested between lidocaine and articaine groups ; non - significant , unpaired student 's t - test . ] of the 41 patients , 21 patients received 3.6 ml of 2% lidocaine with 1:100,000
epinephrine , and 20 patients received 1.8 ml of 4% articaine with 1:100,000
epinephrine .
there was no significant difference in gender distribution or initial
pain between patients treated by the two examiners .
table 1 also illustrates that there was no
significant difference found between the categories of teeth anesthetized with 2%
lidocaine or 4% articaine ( fisher 's exact test , p = 0.326 ) .
table 2 illustrates that there was no significant difference
found in success of initial anaesthesia when tested with sterile 27-gauge needle , or
during relief of the gingival cuff between the lidocaine and articaine groups
( fisher 's exact test p = 0.488 ) .
overall , the success of initial anaesthesia was
97.6% , with the only initial anaesthesia failure occurring during release of the
gingival cuff around a molar in the articaine group .
the data in table 3 show that of those
patients who achieved successful initial anaesthesia , there was no significant
difference between lidocaine and articaine groups with respect to the categories of
teeth anesthetized ( fisher 's exact test , p = 0.324 ) .
of the 40 patients for whom
initial anaesthesia was successful , 33 ( 82.5% ) went on to experience elevation and
extraction of the affected tooth without pain , while 7 ( 17.5% ) experienced
discomfort during the extraction procedure .
table
4 shows that there was no significant difference in success rates of
treatment between the lidocaine and articaine infiltration groups ( fisher 's exact
test , p = 0.226 ) .
with respect to the categories of teeth anesthetized ,
success rates of treatment were similar for both premolars and molars ( fisher 's
exact test , p = 0.387 ) for the 40 patients for whom initial anaesthesia was
successful ( table 5 ) .
[ table captions : the number ( % ) of patients with successful initial anaesthesia in different categories of teeth after lidocaine and articaine infiltrations , tested between lidocaine and articaine groups . treatment success after lidocaine and articaine infiltrations in 40 patients with successful initial anaesthesia , tested between lidocaine and articaine groups . treatment outcomes by tooth category after lidocaine and articaine infiltrations in 40 patients with successful initial anaesthesia , tested between lidocaine and articaine groups ; non - significant , fisher 's exact test . ]
combining the results of anaesthesia success at all stages of treatment gives the overall treatment success : successful treatment was achieved in 19 of 21 patients ( 90.5% ) who
received 3.6 ml 2% lidocaine ( 1:100,000 ) and in 14 of 20 ( 70% ) patients who received
1.8 ml 4% articaine ( 1:100,000 ) .
there was no significant difference in the
incidence of pain - free treatment between lidocaine and articaine groups
( fisher 's exact test , p = 0.13 ) .
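as a check of the overall comparison , using the 19/21 and 14/20 success counts reported above , scipy 's two - sided fisher test reproduces the quoted p - value ( a minimal sketch ) :

from scipy.stats import fisher_exact

# overall pain-free treatment: lidocaine 19/21 successes, articaine 14/20
table = [[19, 2],   # lidocaine: success, failure
         [14, 6]]   # articaine: success, failure
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 2))  # ~0.13, matching the reported result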
table 7 provides an overall summary of the present results for the lidocaine and articaine groups .
[ table captions : overall success rates of treatment after lidocaine and articaine infiltration for all 41 patients included in the present study , tested between lidocaine and articaine groups . summary of results after lidocaine and articaine infiltrations for all 41 patients included in the present study , tested between lidocaine and articaine groups . ]
in 2000 , 4% articaine ( 1:100,000 ) was given fda approval for use in the united states
and has been steadily growing in popularity .
following its fda approval , 4% articaine has proven to be safe and
effective for use as a dental local anesthetic for both maxillary and mandibular
procedures . for dental procedures , 2% lidocaine with epinephrine
is
considered the " gold standard " , and the anesthetic to which others are often
compared . a wide variety of studies
have
been conducted to compare the safety and efficacy of 4% articaine to the standard
local anesthetic , 2% lidocaine , with varying outcomes [ 1 - 10 ] .
these two local
anesthetic solutions were selected because they are two of the most commonly used
preparations in the practice of dentistry in the united states , and this study
aimed to approximate standard clinical practice . as discussed by kanaa et al .
, inconsistent
results have been obtained in the comparison of 2% lidocaine to 4% articaine , with
the latter proving to be more effective in obtaining pulpal anaesthesia after
mandibular infiltration [ 1,3 - 5 ] ,
while no difference in efficacy was noted after inferior alveolar nerve block [ 1,15 - 18 ] .
a similarly inconsistent
set of results emerges when comparing lidocaine and articaine solutions administered
via infiltration in the maxilla .
the volunteer trial by kanaa et al . found no significant difference in the ability to
achieve pulpal anaesthesia when these two solutions were
administered via buccal infiltration .
this finding was mirrored by another study by
costa et al . , which found no statistical
difference in anesthetic success of lidocaine and articaine in the anaesthesia of
maxillary posterior teeth .
other studies have failed to demonstrate a difference in
efficacy between these two solutions in the anaesthesia of maxillary central
incisors and maxillary canines . on the contrary , a study by evans et al
.
found 4% articaine to be superior to 2% lidocaine in the anaesthesia of maxillary
lateral incisors . in the investigation of the anaesthesia of maxillary posterior
teeth with irreversible pulpitis , srinivasan et al . found 4% articaine to be superior to 2% lidocaine at a highly
significant level . in perhaps one of the most extensive reviews of the literature to
date ,
brandt et al . found in their meta
analysis that 4% articaine proves to be 2.44 to 3.81 times more effective than 2%
lidocaine when given as infiltration . because of these highly varied results in the anaesthesia of maxillary teeth , the
results of the present study are congruent with some prior research , but
contradictory to others .
we found no statistically significant difference in the
anesthetic efficacy of lidocaine and articaine in the anaesthesia of maxillary
posterior teeth with irreversible pulpitis .
our results are consistent with those
obtained by kanaa et al . , which is most
similar in methodology to the present study .
in their study , no significant
difference was found in the efficacy of lidocaine and articaine with respect to
pain - free treatment of maxillary teeth with irreversible pulpitis .
a 96.2% success rate of
pain - free extraction was reported , while our success rate was slightly lower at 80.5% .
the present results contrast with the study by srinivasan et al . , in which the efficacy of these local anesthetic solutions in
maxillary posterior teeth with irreversible pulpitis was investigated . in that
study , it was found that 4% articaine offered a significant advantage in the
anaesthesia of both premolars and molars , whereas the present study found no
differences in efficacy between the lidocaine and articaine solutions . it should be noted , however , that while the efficacy of lidocaine and articaine in
the present study did not differ at the statistically significant level ( p = 0.13 ) ,
the lidocaine group did show a 20.5% higher success rate in achieving treatment with
no reported pain , which could be significant in a clinical setting considering the
limited number of patients included in this study .
the results of the present study and other studies which demonstrated contradictory
results exhibit differences in sample size , tooth type variances , anesthetic doses ,
concentration of vasoconstrictor , and the study definition of success . in the
current study ,
3.6 ml of 2% lidocaine with 1:100,000 epinephrine and 1.8 ml of 4%
articaine with 1:100,000 epinephrine were compared .
kanaa et al . employed 2.0 ml volumes of 2% lidocaine with
1:80,000 epinephrine and 4% articaine with 1:100,000 epinephrine .
the 3.6 ml lidocaine and 1.8 ml articaine volumes were used in the
present study in an attempt to elucidate any difference in efficacy when equal doses
( in milligrams ) of each anesthetic were employed .
the definition of treatment success in the present study was similar to that employed
by kanaa et al .
other studies included mild pain in their definition
of treatment success , which may help to
explain the differences in reported outcomes . in the present study ,
successful
treatment was completed on 33 of 40 ( 82.5% ) patients with successful initial
anaesthesia , or 33 of the total 41 ( 80.5% ) patients initially included .
the latter figure
is higher than in the study by kanaa et al . , who achieved pain - free treatment in 84.9%
of patients with successful pulpal anaesthesia and in 62% of all patients recruited
.
the likely difference in treatment success rates between the present study and those
by kanaa et al .
, srinivasan , and others , is that treatment success in the
present study depended solely on a pain - free extraction .
other studies included
extraction as well as other treatment modalities such as pulp extirpation in their
outcomes measures [ 1 - 10 ] .
this is something that should be considered when comparing
the present results against other investigations , since it has been shown that
obtaining anaesthesia for extraction procedures on teeth with irreversible pulpitis
is simpler and more successful than procedures such as pulp extirpation . when comparing the
present results with the study by kanaa et al . ,
and limiting the comparison to extractions only , an 80.5% success
rate is found in the present study , versus a 70% success rate in the previous study .
this may be attributable to the increased quantity of vasoconstrictor administered
in the lidocaine group ( 0.036 mg epinephrine versus 0.022 mg ) in the present study
versus the previous study by kanaa et al .
the present symptoms or reason for extraction of maxillary teeth may play an
important role in evaluating the outcomes of research directed at evaluating
efficacy of anesthetic solutions .
teeth with irreversible pulpitis are 8 times more
likely to experience failure of anaesthesia than normal teeth , so the results of the
present study may not be applicable to patients undergoing extraction or other
treatment of maxillary teeth that are not experiencing irreversible pulpitis .
the present study was conducted as a case - series , and as such , only a small patient
pool was utilized with no patient or clinician blinding .
a future full clinical trial
should include multiple treatment modalities and a blinded randomized protocol to
mitigate these problems .
furthermore , future research would require a specific
number of patients in order to achieve the desired confidence interval and level of
statistical significance required for a full prospective clinical trial . without
making such attempts to eliminate bias and increase the size of the subject pool
, no
specific clinical recommendations can be made from the results of the present study .
it is up to the reader to draw their own conclusions and to hopefully investigate
the subject further through future research .
a combined buccal and palatal infiltration with 72 mg 2% lidocaine with 1:100,000
epinephrine or 72 mg 4% articaine with 1:100,000 epinephrine exhibited similar
success rates of preliminary anaesthesia and similar pain - free treatment in patients
undergoing simple extraction of a single tooth with irreversible pulpitis in the
posterior maxilla .
success rates of treatment were the same for molars and premolars
undergoing extractions , both within the same anesthetic group , and between the
lidocaine and articaine groups .
the
authors do not have any financial interests , either directly or indirectly , in
the products or information listed in the paper . | abstract . objectives : local anaesthesia is the standard of care during dental extractions . with the
advent of newer local anesthetic agents , it is often difficult for the
clinician to decide which agent would be most efficacious in a given clinical scenario .
this study assessed the efficacy of equal - milligram doses
of lidocaine and articaine in achieving surgical anaesthesia of maxillary
posterior teeth diagnosed with irreversible pulpitis . material and methods : this case - series evaluated a total of 41 patients diagnosed with irreversible
pulpitis in a maxillary posterior tooth .
patients randomly received an
infiltration of either 3.6 ml ( 72 mg ) 2% lidocaine with 1:100,000
epinephrine or 1.8 ml ( 72 mg ) 4% articaine with 1:100,000 epinephrine in the
buccal fold and palatal soft tissue adjacent to the tooth .
after 10 minutes ,
initial anaesthesia of the tooth was assessed by introducing a sterile
27-gauge needle into the gingival tissue adjacent to the tooth , followed by
relief of the gingival cuff .
successful treatment was considered to have
occurred when the tooth was extracted with no reported pain .
data were
analyzed with the fisher 's exact test , unpaired t - test and normality
test . results : twenty - one patients received lidocaine and 20 received articaine .
forty of
the 41 patients achieved initial anaesthesia 10 minutes after injection : 21
after lidocaine and 19 after articaine ( p = 0.488 ) .
pain - free extraction was
accomplished in 33 patients : 19 after lidocaine and 14 after articaine
buccal and palatal infiltrations ( p = 0.226 ) . conclusions : there was no significant difference in efficacy between equivalent doses of
lidocaine and articaine in the anaesthesia of maxillary posterior teeth with
irreversible pulpitis . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``College Opportunity Tax Credit Act
of 2006''.
SEC. 2. COLLEGE OPPORTUNITY TAX CREDIT.
(a) In General.--
(1) Allowance of credit.--Section 25A(a) of the Internal
Revenue Code of 1986 (relating to allowance of credit) is
amended--
(A) in paragraph (1), by striking ``the Hope
Scholarship Credit'' and inserting ``the eligible
student credit amount determined under subsection
(b)'', and
(B) in paragraph (2), by striking ``the Lifetime
Learning Credit'' and inserting ``the part-time,
graduate, and other student credit amount determined
under subsection (c)''.
(2) Name of credit.--The heading for section 25A of such
Code is amended to read as follows:
``SEC. 25A. COLLEGE OPPORTUNITY CREDIT.''.
(3) Clerical amendment.--The table of sections for subpart
A of part IV of subchapter A of chapter 1 of such Code is
amended by striking the item relating to section 25A and
inserting the following:
``Sec. 25A. College opportunity credit.''.
(b) Eligible Students.--
(1) In general.--Paragraph (1) of section 25A(b) of the
Internal Revenue Code of 1986 is amended--
(A) by striking ``the Hope Scholarship Credit'' and
inserting ``the eligible student credit amount
determined under this subsection'', and
(B) by striking ``Per student credit'' in the
heading and inserting ``In general''.
(2) Amount of credit.--Paragraph (4) of section 25A(b) of
such Code (relating to applicable limit) is amended by striking
``2'' and inserting ``3''.
(3) Credit refundable.--
(A) In general.--Section 25A of such Code is
amended by redesignating subsection (i) as subsection
(j) and by inserting after subsection (h) the following
new subsection:
``(i) Portion of Credit Refundable.--
``(1) In general.--The aggregate credits allowed under
subpart C shall be increased by the amount of the credit which
would be allowed under this section--
``(A) by reason of subsection (b), and
``(B) without regard to this subsection and the
limitation under section 26(a) or subsection (j), as
the case may be.
``(2) Treatment of credit.--The amount of the credit
allowed under this subsection shall not be treated as a credit
allowed under this subpart and shall reduce the amount of
credit otherwise allowable under subsection (a) without regard
to section 26(a) or subsection (j), as the case may be.''.
(B) Technical amendment.--Section 1324(b) of title
31, United States Code, is amended by inserting ``, or
enacted by the College Opportunity Tax Credit Act of
2006'' before the period at the end.
(4) Limitations.--
(A) Credit allowed for 4 years.--Subparagraph (A)
of section 25A(b)(2) of such Code is amended--
(i) by striking ``2'' in the text and in
the heading and inserting ``4'', and
(ii) by striking ``the Hope Scholarship
Credit'' and inserting ``the credit
allowable''.
(B) Elimination of limitation on first 2 years of
postsecondary education.--Section 25A(b)(2) of such
Code is amended by striking subparagraph (C) and by
redesignating subparagraph (D) as subparagraph (C).
(5) Conforming amendments.--
(A) The heading of subsection (b) of section 25A of
such Code is amended to read as follows:
``(b) Eligible Students.--''.
(B) Section 25A(b)(2) of such Code is amended--
(i) in subparagraph (B), by striking ``the
Hope Scholarship Credit'' and inserting ``the
credit allowable'', and
(ii) in subparagraph (C), as redesignated
by paragraph (4)(B), by striking ``the Hope
Scholarship Credit'' and inserting ``the credit
allowable''.
(c) Part-Time, Graduate, and Other Students.--
(1) In general.--Subsection (c) of section 25A of the
Internal Revenue Code of 1986 is amended to read as follows:
``(c) Part-Time, Graduate, and Other Students.--
``(1) In general.--In the case of any student for whom an
election is in effect under this section for any taxable year,
the part-time, graduate, and other student credit amount
determined under this subsection for any taxable year is an
amount equal to the sum of--
``(A) 40 percent of so much of the qualified
tuition and related expenses paid by the taxpayer
during the taxable year (for education furnished to the
student during any academic period beginning in such
taxable year) as does not exceed $1,000, plus
``(B) 20 percent of such expenses so paid as
exceeds $1,000 but does not exceed the applicable
limit.
``(2) Applicable limit.--For purposes of paragraph (1)(B),
the applicable limit for any taxable year is an amount equal to
3 times the dollar amount in effect under paragraph (1)(A) for
such taxable year.
``(3) Special rules for determining expenses.--
``(A) Coordination with credit for eligible
students.--The qualified tuition and related expenses
with respect to a student who is an eligible student
for whom a credit is allowed under subsection (a)(1)
for the taxable year shall not be taken into account
under this subsection.
``(B) Expenses for job skills courses allowed.--For
purposes of paragraph (1), qualified tuition and
related expenses shall include expenses described in
subsection (f)(1) with respect to any course of
instruction at an eligible educational institution to
acquire or improve job skills of the student.''.
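The two-tier percentage structure of paragraphs (1) and (2) above reduces to simple arithmetic; the sketch below illustrates it under the assumption of the $1,000 base amount, with a hypothetical expense figure (the function name is illustrative, not statutory language):

```python
def part_time_student_credit(qualified_expenses, base=1000.0):
    """Sketch of proposed sec. 25A(c): 40% of expenses up to the base amount,
    plus 20% of expenses between the base and the applicable limit (3x base)."""
    applicable_limit = 3 * base                                    # sec. 25A(c)(2)
    tier1 = 0.40 * min(qualified_expenses, base)                   # sec. 25A(c)(1)(A)
    tier2 = 0.20 * max(0.0, min(qualified_expenses, applicable_limit) - base)  # (c)(1)(B)
    return tier1 + tier2

# $4,000 of expenses: 0.40 * 1,000 + 0.20 * 2,000 = $800, the maximum credit.
print(part_time_student_credit(4000.0))  # 800.0
```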
(2) Inflation adjustment.--
(A) In general.--Subsection (h) of section 25A of
such Code (relating to inflation adjustments) is
amended by adding at the end the following new
paragraph:
``(3) Dollar limitation on amount of credit under
subsection (a)(2).--
``(A) In general.--In the case of a taxable year
beginning after 2007, each of the $1,000 amounts under
subsection (c)(1) shall be increased by an amount equal
to--
``(i) such dollar amount, multiplied by
``(ii) the cost-of-living adjustment
determined under section 1(f)(3) for the
calendar year in which the taxable year begins,
determined by substituting `calendar year 2006'
for `calendar year 1992' in subparagraph (B)
thereof.
``(B) Rounding.--If any amount as adjusted under
subparagraph (A) is not a multiple of $100, such amount
shall be rounded to the next lowest multiple of
$100.''.
(B) Conforming amendment.--The heading for
paragraph (1) of section 25A(h) of such Code is amended
by inserting ``under subsection (a)(1)'' after
``credit''.
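The adjustment in new paragraph (h)(3) above is likewise mechanical; a minimal sketch, assuming a hypothetical cost-of-living adjustment value:

```python
def adjusted_dollar_amount(base=1000.0, cola=0.027):
    """Sketch of sec. 25A(h)(3): increase the $1,000 amount by the cost-of-living
    adjustment, then round down to the next lowest multiple of $100.
    The cola value here is a hypothetical illustration, not an official figure."""
    raw = base * (1 + cola)          # subparagraph (A)
    return int(raw // 100) * 100     # subparagraph (B): next lowest multiple of $100

print(adjusted_dollar_amount(1000.0, 0.027))  # 1000 (1,027 rounds down to 1,000)
print(adjusted_dollar_amount(1000.0, 0.123))  # 1100
```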
(d) Credit Allowed Against Alternative Minimum Tax.--
(1) In general.--Section 25A of the Internal Revenue Code
of 1986, as amended by subsection (b)(3), is amended by
redesignating subsection (j) as subsection (k) and by inserting
after subsection (h) the following new subsection:
``(j) Limitation Based on Amount of Tax.--In the case of a taxable
year to which section 26(a)(2) does not apply, the credit allowed under
subsection (a) for the taxable year shall not exceed the excess of--
``(1) the sum of the regular tax liability (as defined in
section 26(b)) plus the tax imposed by section 55, over
``(2) the sum of the credits allowed under this subpart
(other than this section and sections 23, 24, and 25B) and
section 27 for the taxable year.''.
(2) Conforming amendment.--Section 25(a)(1) of such Code is
amended by inserting ``25A,'' after ``24,''.
(e) Effective Date.--The amendments made by this section shall
apply to taxable years beginning after December 31, 2006. | College Opportunity Tax Credit Act of 2006 - Amends the Internal Revenue Code to replace the Hope Scholarship and Lifetime Learning Tax Credits with an increased, partially refundable college opportunity tax credit to cover up to four years (currently, limited to two years) of the tuition and related expenses of full or part-time postsecondary and graduate students. |
Shaquille O'Neal says he would dominate today's NBA and goes off on his doubters, plus love for DeAndre Ayton, Senselessly Sensitive, and the return of Walmart or Waffle House - The Big Podcast with Shaq - Episode 172
Shaquille O'Neal opens the show this week by going OFF on the doubters who think he wouldn't dominate today's NBA - because he is sure he'd be able to average 50 a night against the soft NBA centers today. Plus, Shaq thinks that DeAndre Ayton is a great player, but he shouldn't strive to be the next Shaq but rather the first Ayton, and he hopes Ayton breaks his records. Shaq is also happy for Dwyane Wade, we recap the insanity from the Lost Lands Music Festival, and debate whether Shaq could take Henry Cavill's place as the new Superman. We do a proper full "Senselessly Sensitive" segment this week, we try to get back in the hunt in the PodcastOne Sportsnet NFL Challenge, of course we get Borderline, and we play a round of the classic "Walmart or Waffle House!" Get into the mix on Twitter by following @Shaqcast or using #Shaqcast - follow The Big Podcast with Shaq on Instagram and Facebook - or email your best stuff to [email protected]. Head over to BetOnline.AG and use promo code PODCAST1 to receive a 50% sign up bonus.
||||| Shaquille O'Neal gives the flat-Earth theory his seal of approval. (AP)
I’m sorry to break it to you, but Shaquille O’Neal is apparently a flat-Earther, too. Actually, I’m not sorry at all. I love this NBA narrative so, so much, and I’d like to thank Shaq for breathing more life into it.
Cleveland Cavaliers star Kyrie Irving was the first NBA player to reveal his flat-Earth beliefs, summarized as such: “Can you really think of us rotating around the sun, and all planets align, rotating in specific dates, being perpendicular with what’s going on with these ‘planets’ and stuff like this?”
Soon afterwards, Denver Nuggets wing Wilson Chandler and Golden State Warriors forward Draymond Green endorsed Irving’s flat-Earth theory, with the latter explaining away NASA’s photos of the planet from space by suggesting everyone can manipulate doctored photos of the globe on their phones.
The NBA storyline became so outrageous commissioner Adam Silver had to address it in his annual state-of-the-league address at the All-Star Game, clarifying, “I believe the world is round,” and suggesting Irving was making some broader social commentary about fake news in this country.
Which, no he wasn’t. Irving doubled down on his flat-Earth theory this past week, before detailing his lucid dreaming skills and informing us how an ex-teammate came to him in a dream to say goodbye.
These are all very real things that NBA players have said.
This is one wild theme to the 2016-17 NBA season, and Shaq just made it wilder when asked about Irving’s flat-Earth theory on his podcast. This was his response, through a series of interruptions:
“It’s true. The Earth is flat. The Earth is flat. Yes, it is. Listen, there are three ways to manipulate the mind — what you read, what you see and what you hear. In school, first thing they teach us is, ‘Oh, Columbus discovered America,’ but when he got there, there were some fair-skinned people with the long hair smoking on the peace pipes. So, what does that tell you? Columbus didn’t discover America. So, listen, I drive from coast to coast, and this s*** is flat to me. I’m just saying. I drive from Florida to California all the time, and it’s flat to me. I do not go up and down at a 360-degree angle, and all that stuff about gravity, have you looked outside Atlanta lately and seen all these buildings? You mean to tell me that China is under us? China is under us? It’s not. The world is flat.”
This man has a doctorate degree in education from Barry University in Miami, Fla. Seriously.
I’m not sure which detail I enjoyed better — Shaq thinking the world is flat because he drives coast to coast or Shaq thinking he’d be driving “up and down at a 360-degree angle” if the Earth was spherical.
Unfortunately, you can’t drive from here to Asia, but there are things called boats and airplanes, and if you head west from California, you’ll arrive in China. And if you head west from there, you’ll eventually end up in California again. Because we live on a globe. Shaq should know full well. He’s been to China.
And, technically, a 360-degree angle is just a circle. I don’t know why Shaq thinks you would be driving up and down on a circle, but it is possible to drive comfortably on a spherical object when that object’s circumference is 24,901 miles. Think of an ant walking around a basketball, if you will. It might think it’s moving in a straight line, but eventually it will navigate the orb and arrive in the same place.
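For the record, the numbers are easy to check. A quick back-of-envelope script (the 2,500-mile Florida-to-California figure is a rough assumption) shows how little "tilt" a driver would ever feel on a sphere this size:

```python
import math

# Sanity check of the "flat to me" drive: how much does a road on a sphere
# of circumference 24,901 miles actually curve over a cross-country trip?
circumference = 24901.0                      # miles
radius = circumference / (2 * math.pi)       # ~3,963 miles
trip = 2500.0                                # rough Florida-to-California distance, miles

arc_degrees = trip / circumference * 360     # arc covered by the whole trip
per_mile_deg = 360 / circumference           # tilt accumulated per mile driven
theta = math.radians(per_mile_deg)
drop_per_mile_ft = radius * 5280 * (1 - math.cos(theta))  # vertical "drop" over 1 mile

print(f"arc covered: {arc_degrees:.1f} deg; tilt per mile: {per_mile_deg:.4f} deg")
print(f"vertical drop over 1 mile: {drop_per_mile_ft:.2f} ft")  # ~0.67 ft, about 8 inches
```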
Also, there are things called mountains, and you drive over them on your way to California. At various angles. But never at a 360-degree angle, because your car would just be careening in circles into a ravine. But we shouldn't have to explain mountains to you, just as we shouldn't have to tell you the Earth is not flat. And that's what's so great about this new NBA narrative. It raises so many questions, from where players think the sun goes at night to why they believe they travel to different time zones.
Of course, there remains the possibility that Shaq is just trolling us all. In which case, kudos to him for coming up with some fantastically elaborate fiction about how China cannot be under us, because I can’t get enough of NBA players and their flat-Earth theories — real or imaginary. Keep ’em coming.
– – – – – – –
Ben Rohrbach is a contributor for Ball Don’t Lie and Shutdown Corner on Yahoo Sports. Have a tip? Email him at [email protected] or follow him on Twitter! | – Apparently there's a belief circulating in the NBA that the Earth is flat. Kyrie Irving of the Cleveland Cavaliers first revealed his flat-Earth beliefs back in February, and now retired basketball player Shaquille O'Neal has revealed he's on the same page. In an episode of his podcast broadcast late February but only recently picked up by the media, Shaq said, per Sports Illustrated: "It’s true. The Earth is flat. The Earth is flat. Yes, it is. Listen, there are three ways to manipulate the mind—what you read, what you see, and what you hear." He used an example involving Christopher Columbus, arguing that Columbus didn't really discover America because there were already "fair-skinned people" living here when Columbus arrived. Then he got into the real nitty gritty. He explained that he drives from coast to coast, and it certainly seems flat to him: "I’m just saying. I drive from Florida to California all the time, and it’s flat to me. I do not go up and down at a 360-degree angle, and all that stuff about gravity, have you looked outside Atlanta lately and seen all these buildings? You mean to tell me that China is under us? China is under us? It’s not. The world is flat." Kenny Ducey at SI says that while he wants to believe this is all a joke, both Irving and O'Neal seem to be taking it seriously; Irving, for example, has continued to defend his beliefs. Ben Rohrbach at Yahoo Sports, who first uncovered the Shaq podcast, agrees that Irving is not kidding around (or trying to make some sort of point about "fake news," as NBA commissioner Adam Silver suggested), and points out that at least two other NBA players have agreed with him. |
A Mississippi woman has been charged in a second death related to giving buttocks-enhancing injections without being trained or licensed.
The state attorney general's office says 53-year-old Tracey Lynn Garner of Jackson, formerly known as Morris Garner, was arrested Thursday and charged with one count of depraved heart murder. Conviction carries a potential life sentence.
The release says Garner injected "a silicone substance" into Marilyn Hale of Selma, Ala., on Jan. 13, 2010, and Hale later died.
Records show Garner was in the Hinds County Detention Center on Monday. Her attorney, John Colette, was not immediately available.
Garner had been on house arrest awaiting trial in a similar case in the 2012 death of an Atlanta woman. ||||| Morris Garner, a Jackson man who lives as a woman under the name Tracey Lynne Garner, will face murder charges in the death of an Atlanta woman.
The specific charge of depraved heart murder stems from Garner's role in allegedly injecting 37-year-old Karima Gordon with "a foreign and possible counterfeit substance during an illegal medical procedure," according to Mississippi Attorney General Jim Hood.
If convicted of depraved heart murder, defined as demonstrating "callous disregard for human life" and resulting in death, Garner could get life in prison.
Hood's office said Garner performed the procedure at his home in Jackson. Garner's arrest resulted from a six-month-long investigation by the Mississippi AG's office, Mississippi Board of Medical Licensure and the U.S. Food and Drug Administration.
“While this remains a murder case, our intellectual property task force is involved to also investigate the possibility that the substance injected into the victim was a counterfeit version of silicone,” Hood said in a statement.
The Associated Press reported that Hinds County Judge Melvin Priester denied bond after a hearing Tuesday afternoon. Garner was arrested Sept. 6, according to jail records.
After the hearing, Garner's lawyer, John Colette, said he was "shocked" by the seriousness of the charge and the fact that his client was held without bond. He denied his client is guilty of depraved-heart murder.
Garner said in court that he is 53 years old and worked as a floral and interior designer. He wore a yellow prison jumpsuit and shackles, and his hair was in a short ponytail.
Colette said Garner had undergone operations to change gender. Hood referred to Garner as a man, and Garner was booked into the Hinds County jail as a man.
Hood said that Gordon, who had served in the military and wanted to become a model, found Garner after meeting someone on the Internet known to authorities only as "Pebbles." Gordon met Pebbles in person in New York City and paid her $200 for the referral to Garner, according to Hood.
Hood said his investigators were looking for Pebbles.
The Associated Press found a Twitter account in Gordon's name from which only one message had been sent, and it was to a Pebbelz Da Model in December. The Twitter account of Pebbelz Da Model had a link to a website by the same name.
Pebbelz's YouTube page also has more than 1.7 million page views. However, Hood's office had no comment on whether Pebbelz Da Model is the same person his investigators were looking for.
"We've had people practicing medicine without a license, but nothing like this," Hood said.
The Associated Press contributed to this story. | – Tracey Lynn Garner had already been charged with murder in one silicone butt injection death, and now she's been charged in a second, the Clarion-Ledger reports. Garner (who, authorities say, is a man also known as Morris Garner who lives as a woman) is accused of illegally injecting counterfeit silicone into two women, both of whom later died, despite the fact that she was neither trained nor licensed, the AP reports. She was first arrested last year after Karima Gordon died of a blood clot in her lung a few days after the injection. Now authorities say a similar thing happened in 2010 with a woman named Marilyn Hale. Officials say Gordon found someone named "Pebbles" on the Internet, met up with her and paid her $200, and was referred to Garner for the procedure. Natasha Stewart, an adult entertainer also known as Pebbelz Da Model, has also been charged in the case. Last year, the Jackson Free Press identified Garner as a floral and interior designer, and reported that she performed the procedures in her Mississippi home. |
the quark model @xcite can reproduce the behavior of observables such as the spectrum and the magnetic moments in the baryon and meson sector , but it neglects quark - antiquark pair - creation ( or continuum - coupling ) effects . above threshold ,
these couplings lead to strong decays and below threshold , they lead to virtual @xmath2 ( @xmath3 ) components in the hadron wave function and shifts of the physical mass with respect to the bare mass .
the unquenching of the quark model for hadrons is a way to take these components into account .
pioneering works on the unquenching of the quark model were done by törnqvist and collaborators , who used a unitarized qm @xcite , while van beveren and rupp used a t - matrix approach @xcite .
these methods were used ( with a few variations ) by several authors to study the influence of the meson - meson ( meson - baryon ) continuum on meson ( baryon ) observables .
these techniques were applied to the study of the scalar meson nonet ( @xmath4 , @xmath5 , etc . ) of ref .
@xcite in which the loop contributions are given by the hadronic intermediate states that each meson can access .
it is via these hadronic loops that the bare states become `` dressed '' and the hadronic loop contributions totally dominate the dynamics of the process . on the other hand , isgur and coworkers in ref .
@xcite demonstrated that the effects of the $q\bar{q}$ sea pairs in meson spectroscopy are simply a renormalization of the meson string tension .
also , the strangeness content of the nucleon and electromagnetic form factors were investigated in @xcite , whereas capstick and morel in ref .
@xcite analyzed baryon meson loop effects on the spectrum of nonstrange baryons . in the meson sector , eichten _ et al .
_ explored the influence of the open - charm channels on the charmonium properties using the cornell coupled - channel model @xcite to assess departures from the single - channel potential - model expectations . in this contribution
, we discuss some of the latest applications of the uqm ( the approach is a generalization of the unitarized quark model @xcite ) to study the flavor asymmetry and strangeness of the proton , in which the effects of the quark - antiquark pairs were introduced into the constituent quark model ( cqm ) in a systematic way and the wave functions were given explicitly .
finally , the uqm is applied to describe meson observables and the spectroscopy of the charmonium and bottomonium .
in the unquenched quark model for baryons @xcite and mesons @xcite , the hadron wave function is made up of a zeroth order $q^3$ ( $q\bar{q}$ ) configuration plus a sum over the possible higher fock components , due to the creation of $^3P_0$ $q\bar{q}$ pairs . thus , we have

$$| \psi_A \rangle = \mathcal{N} \left[ | A \rangle + \sum_{BC \, \ell J} \int dk \, k^2 \, \frac{ | BC \, k \, \ell J \rangle \, \langle BC \, k \, \ell J | \, T^\dagger | A \rangle }{ E_A - E_B - E_C } \right] \, , \qquad (1)$$

where $T^\dagger$ stands for the $^3P_0$ quark - antiquark pair - creation operator @xcite , $A$ is the baryon / meson , $B$ and $C$ represent the intermediate state hadrons ( see figures [ figbaryon ] and [ figmeson ] ) . $E_A$ , $E_B$ and $E_C$ are the corresponding energies , $k$ and $\ell$ the relative radial momentum and orbital angular momentum between $B$ and $C$ , and $J$ is the total angular momentum .
it is worthwhile noting that in refs . @xcite , the constant pair - creation strength in the operator of eq . (1) was substituted with an effective one , to suppress unphysical heavy quark pair - creation .
[ figure : two diagrams can contribute to the process $A \to BC$ ; the initial ( i = 1 - 4 ) and final ( i = 5 - 8 ) labels stand for the various quarks or antiquarks , respectively . ]

in the uqm the matrix elements of an observable $\hat{\mathcal{O}}$ can be calculated as

$$\langle \hat{\mathcal{O}} \rangle = \langle \psi_A | \, \hat{\mathcal{O}} \, | \psi_A \rangle \, , \qquad (2)$$

where $| \psi_A \rangle$ is the state of eq . (1) . the result will receive a contribution from the valence part and one from the continuum component , which is absent in naive qm calculations . the introduction of continuum effects in the qm can thus be essential to study observables that only depend on $q\bar{q}$ sea pairs , like the strangeness content of the nucleon electromagnetic form factors @xcite or the flavor asymmetry of the nucleon sea @xcite . in other cases , continuum effects can provide important corrections to baryon / meson observables , like the self - energy corrections to meson masses @xcite or the importance of the orbital angular momentum in the spin of the proton @xcite .
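as a rough illustration of eq . (2) , the toy script below splits an observable into valence and continuum pieces for a handful of bc channels ; all amplitudes and matrix elements are invented numbers , not uqm results :

```python
import numpy as np

# Toy sketch of eq. (2): <O> = <psi_A|O|psi_A> with |psi_A> = N(|A> + sum_BC c_BC |BC>).
# c_BC stand in for the continuum amplitudes <BC k l J|T+|A>/(E_A - E_B - E_C);
# O_val and O_BC are hypothetical matrix elements of the observable O.
c_BC = np.array([0.30, -0.15, 0.10])     # invented amplitudes for three BC channels
O_val = 2.79                             # invented valence-quark contribution
O_BC = np.array([0.40, 0.25, -0.10])     # invented diagonal continuum contributions

norm2 = 1.0 + np.sum(c_BC**2)            # 1/N^2, assuming orthonormal channels
expectation = (O_val + np.sum(c_BC**2 * O_BC)) / norm2
print(f"valence weight: {1.0/norm2:.2f}, continuum weight: {np.sum(c_BC**2)/norm2:.2f}")
print(f"<O> = {expectation:.3f}")
```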
the first evidence for the flavor asymmetry of the proton sea was provided by nmc at cern @xcite . the flavor asymmetry in the proton is related to the gottfried integral for the difference of the proton and neutron electromagnetic structure functions ,

$$S_G = \int_0^1 \frac{dx}{x} \left[ F_2^p(x) - F_2^n(x) \right] \, . \qquad (3)$$

under the assumption of a flavor symmetric sea , one obtains the gottfried sum rule $S_G = 1/3$ . the final nmc value is $S_G = 0.235 \pm 0.026$ at $Q^2 = 4$ ( gev / c )$^2$ for the gottfried integral over the range $0.004 \le x \le 0.8$ @xcite , which implies a flavor asymmetric sea .
the violation of the gottfried sum rule has been confirmed by other experimental collaborations @xcite .
theoretically , it was shown in ref . @xcite that the coupling of the nucleon to the pion cloud provides a natural mechanism to produce a flavor asymmetry .
[ figure : comparison of the value of the gottfried sum rule calculated within the uqm with the experimental data from nmc 1994 , nmc 1997 , hermes , and e866 ; figure taken from ref . @xcite , aps copyright . ]

in the uqm , the flavor asymmetry can be calculated from the difference of the probability to find $\bar{d}$ and $\bar{u}$ sea quarks in the proton ,

$$\bar{d} - \bar{u} = \int_0^1 dx \left[ \bar{d}(x) - \bar{u}(x) \right] \, . \qquad (4)$$

note that , even in absence of explicit information on the ( anti)quark distribution functions , the integrated value can be obtained directly from the left - hand side of eq . (4) . our result is shown in fig . [ protonasy ] .
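a toy numerical check of the connection between the integrated sea asymmetry and the gottfried integral , using the standard relation $S_G = 1/3 - (2/3) \int_0^1 dx \, [ \bar{d}(x) - \bar{u}(x) ]$ ; the parametrization of $\bar{d} - \bar{u}$ below is purely illustrative , not a fit to data :

```python
import numpy as np

# Toy evaluation of S_G = 1/3 - (2/3) * int_0^1 [dbar(x) - ubar(x)] dx.
# The dbar - ubar shape is a hypothetical parametrization chosen only so that
# the integrated asymmetry comes out around 0.11.
x = np.linspace(1e-4, 1.0, 100_000)
dx = x[1] - x[0]
dbar_minus_ubar = 0.9 * (1.0 - x) ** 7          # illustrative sea-asymmetry shape
asym = np.sum(dbar_minus_ubar) * dx             # integrated dbar - ubar (Riemann sum)
S_G = 1.0 / 3.0 - (2.0 / 3.0) * asym
print(f"integrated dbar - ubar = {asym:.3f}, Gottfried integral S_G = {S_G:.3f}")
```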
the results for the two strangeness observables were obtained in a calculation involving a sum over intermediate states up to four oscillator shells for both baryons and mesons @xcite . in the uqm formalism , the strange magnetic moment of the proton is defined as the expectation value of the operator

$$\hat{\mu}_s = \sum_i \hat{\mu}_{z,i} \, \mathcal{P}_s(i) \qquad (5)$$

on the proton state of eq . (1) , which represents the contribution of the strange quarks to the magnetic moment of the proton ; $\hat{\mu}_{z,i} \, \mathcal{P}_s(i)$ is the magnetic moment of the quark $i$ times a projector on strangeness , and the strange quark magnetic moment is set as in ref . @xcite . our result is @xmath34 ( see fig . [ smagnetic ] ) .
similarly , the strange radius of the proton is defined as the expectation value of the operator

$$\hat{r}^2_s = \sum_i e_i \, \mathcal{P}_s(i) \left( \vec{r}_i - \vec{R}_{cm} \right)^2 \qquad (6)$$

on the proton state of eq . (1) , where $e_i \, \mathcal{P}_s(i)$ is the electric charge of the quark $i$ times a projector on strangeness , and $\vec{r}_i$ and $\vec{R}_{cm}$ are the coordinates of the quark $i$ and of the intermediate state center of mass , respectively . the expectation value of $\hat{r}^2_s$ on the proton is equal to @xmath41 . in fig . [ sradio ] our result is compared with the experimental data .
[ figure : comparison between our resulting value for the strange radius of the proton in the uqm and the experimental data ; figure taken from ref . @xcite , aps copyright . ]
in refs . @xcite , the method was used by some of us to compute the charmonium ( $c\bar{c}$ ) and bottomonium ( $b\bar{b}$ ) spectra with self - energy corrections , due to continuum coupling effects . in the uqm , the physical mass of a meson is given by the sum of two terms : a bare energy , $E_a$ , calculated within a potential model @xcite , and a self energy correction , $\Sigma(M_a)$ , computed within the uqm formalism ,

$$M_a = E_a + \Sigma(M_a) \, . \qquad (7)$$

[ figure : charmonium and bottomonium spectra with self energy corrections ; black lines are theoretical predictions and blue lines are experimental data available ; figures taken from refs . @xcite , aps copyright . ]

our results for the self energy corrections of the charmonium @xcite and bottomonium @xcite spectra are shown in figures [ charm ] and [ botton ] .
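the structure of eq . (7) can be made concrete with a toy loop integral ; the gaussian vertex , masses and couplings below are illustrative assumptions , not the $^3P_0$ vertex or the fitted parameters of the uqm :

```python
import numpy as np

# Toy self-energy: Sigma(M) = int dk k^2 |V(k)|^2 / (M - E_BC(k)), for a single
# BC channel, evaluated below threshold so the denominator never vanishes.
def E_BC(k, mB=1.87, mC=1.87):                   # intermediate-state energy (GeV), toy masses
    return np.sqrt(mB**2 + k**2) + np.sqrt(mC**2 + k**2)

def sigma(M, g=0.5, beta=0.4):
    k = np.linspace(1e-4, 3.0, 200_000)
    dk = k[1] - k[0]
    vertex2 = (g * k) ** 2 * np.exp(-((k / beta) ** 2))   # toy |<BC k|T+|A>|^2
    return np.sum(k**2 * vertex2 / (M - E_BC(k))) * dk    # Riemann sum

E_bare = 3.70                                    # toy bare mass below the 3.74 GeV threshold
shift = sigma(E_bare)
print(f"self-energy = {shift:+.4f} GeV, dressed mass ~ {E_bare + shift:.4f} GeV")
```

as expected for a state below threshold , the denominator is everywhere negative , so the toy self energy lowers the mass with respect to the bare value .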
in the baryon sector , our results for asymmetry and `` strangeness '' observables , as shown in figures [ protonasy ] , [ smagnetic ] and [ sradio ] , are in agreement with the experimental data .
these observables can only be understood when continuum components in the wave function are included .
our results in the meson sector for the self energy corrections of the charmonium and bottomonium spectra ( see figures [ charm ] and [ botton ] ) show that the pair - creation effects on the spectrum of heavy mesons are quite small . specifically for charmonium and bottomonium states , they are of the order of @xmath46 and @xmath47 , respectively . the relative mass shifts , i.e. the difference between the self energies of two meson states , are of the order of a few tens of mev .
however , as qm s can predict the meson masses with relatively high precision in the heavy quark sector , even these corrections can become significant . these results are particularly interesting in the case of states close to an open - flavor decay threshold , like the @xmath48 and @xmath49 mesons . in our picture the @xmath48 can be interpreted as a $c\bar{c}$ core [ the @xmath50 ] , plus higher fock components due to the coupling to the meson - meson continuum . in ref . @xcite , we showed that the probability to find the @xmath48 in its core or continuum components is approximately @xmath51 and @xmath52 , respectively .
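the bookkeeping behind such probabilities is just the normalization of eq . (1) ; a minimal sketch with invented amplitudes :

```python
import numpy as np

# Normalization sketch: for |state> = N(|core> + sum_BC c_BC |BC>), with
# orthonormal channels, P(core) = 1 / (1 + sum |c_BC|^2). The amplitudes are
# placeholders, not the computed couplings for the meson discussed above.
c_BC = np.array([0.8, 0.6, 0.4])                 # hypothetical continuum amplitudes
p_core = 1.0 / (1.0 + np.sum(c_BC**2))
print(f"P(core) = {p_core:.2f}, P(continuum) = {1.0 - p_core:.2f}")
```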
in conclusion , the flavor asymmetry in the proton can be well described by the uqm . the effects of the continuum components on the `` strangeness '' observables of the proton are found to be negligible ; nevertheless , our results are compatible with the latest experimental data and recent lattice calculations . in the meson sector , our self energy corrections for charmonia and bottomonia are found to be significant .
this work is supported in part by papiit - dgapa , mexico ( grant in107314 ) and infn sezione di genova .
eichten e , gottfried k , kinoshita t , kogut j b , lane k d and yan t -m 1975 _ phys . rev . lett . _ * 34 * 369 ; eichten e , gottfried k , kinoshita t , lane k d and yan t -m 1978 _ phys . rev . _ d * 17 * 3090 ; 1980 _ phys . rev . _ d * 21 * 203 .
godfrey s and isgur n 1985 _ phys . rev . _ d * 32 * 189 .
capstick s and isgur n 1986 _ phys . rev . _ d * 34 * 2809 .
ferraris m , giannini m m , pizzo m , santopinto e and tiator l 1995 _ phys . lett . _ b * 364 * 231 ; santopinto e , iachello f and giannini m m 1998 _ eur . phys . j. _ a * 1 * 307 ; santopinto e and giannini m m 2012 _ phys . rev . _ c * 86 * 065202 ; giannini m m and santopinto e 2015 _ chin . j. phys . _ * 53 * 020301 ; aiello m _ et al . _ 1996 _ phys . lett . _ b * 387 * 215 ; aiello m _ et al . _ 1998 _ j. of phys . _ g * 24 * 753 ; bijker r , iachello f and santopinto e 1998 _ j. of phys . _ a * 31 * 9041 ; de sanctis m _ et al . _ 2007 _ phys . rev . _ c * 76 * 062201 .
santopinto e 2005 _ phys . rev . _ c * 72 * 022201 ; ferretti j , vassallo a and santopinto e 2011 _ phys . rev . _ c * 83 * 065204 ; de sanctis m , ferretti j , santopinto e and vassallo a , arxiv:1410.0590 .
de sanctis m , ferretti j and santopinto e 2011 _ phys . rev . _ c * 84 * 055201 .
van beveren e , rijken t a , metzger k , dullemond c , rupp g and ribeiro j e 1986 _ z. phys . _ c * 30 * 615 .
ono s and törnqvist n a 1984 _ z. phys . _ c * 23 * 59 ; heikkilä k , ono s and törnqvist n a 1984 _ phys . rev . _ d * 29 * 110 [ 1984 erratum - ibid . * 29 * 2136 ] ; ono s , sanda a i and törnqvist n a 1986 _ phys . rev . _ d * 34 * 186 .
ferretti j , galatà g and santopinto e 2014 _ phys . rev . _ d * 90 * 054010 .
kalashnikova y s 2005 _ phys . rev . _ d * 72 * 034010 .
amaudruz p _ et al . _ 1991 _ phys . rev . lett . _ * 66 * 2712 ; arneodo m _ et al . _ 1997 _ nucl . phys . _ b * 487 * 3 .

| in this contribution , we briefly analyze the formalism of the unquenched quark model ( uqm ) and its application to the description of several observables of hadrons . in the uqm ,
the effects of $q\bar{q}$ sea pairs are introduced explicitly into the quark model through a qcd - inspired $^3P_0$ pair - creation mechanism .
we present our description of flavour asymmetry and strangeness in the proton when baryon - meson components are included . in the meson sector , we present the charmonium and bottomonium spectra with self - energy corrections due to the coupling to the meson - meson components .
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Two Floods and You Are Out of the
Taxpayers' Pocket Act of 1999''.
SEC. 2. FLOOD LOSS REDUCTION FOR REPETITIVE FLOOD INSURANCE CLAIM
PROPERTIES.
Section 1366 of the National Flood Insurance Act of 1968 (42 U.S.C.
4104c) is amended--
(1) in subsection (a), by inserting after the first
sentence the following new sentence: ``In awarding grants under
this section for mitigation activities, the Director shall give
priority to properties for which repetitive flood insurance
claim payments have been made.'';
(2) in the last sentence of subsection (c), by inserting
before the period the following: ``, and shall address
properties in the area for which repetitive flood insurance
claim payments have been made''; and
(3) in subsection (f), by striking paragraph (3) and
inserting the following new paragraph:
``(3) Waiver.--The Director may waive the dollar amount
limitations under paragraphs (1) and (2) for any State or
community--
``(A) for any 5-year period when a major disaster
or emergency declared by the President (pursuant to the
Robert T. Stafford Disaster Relief and Emergency
Assistance Act (42 U.S.C. 5121 et seq.)) as a result of
flood conditions is in effect with respect to areas in
the State or community; or
``(B) whenever the Director determines that the
State or community has properties for which repetitive
flood insurance claim payments have been made and that
waiver of the cost limitations is cost-effective and in
the best interests of the National Flood Insurance
Fund.''.
SEC. 3. NATIONAL FLOOD MITIGATION FUND.
(a) Credits.--Section 1367(b) of the National Flood Insurance Act
of 1968 (42 U.S.C. 4104d(b)) is amended--
(1) by striking paragraph (1) and inserting the following
new paragraph:
``(1) amounts from the National Flood Insurance Fund, in
amounts not exceeding $70,000,000 in each of fiscal years 2000,
2001, 2002, and 2003, of which all amounts made available under
this paragraph in excess of $20,000,000 in each such fiscal
year shall be used only under section 1366 for mitigation
activities for properties for which repetitive flood insurance
claim payments have been made, such sums to remain available
until expended;'';
(2) in paragraph (2), by striking ``and'' at the end;
(3) in paragraph (3), by striking the period at the end and
inserting ``; and''; and
(4) by adding at the end the following new paragraph:
``(4) any amounts which may be appropriated for the Fund,
which are authorized to be appropriated in amounts not
exceeding $50,000,000 in each of fiscal years 2000, 2001, 2002,
and 2003, which amounts shall be used only under section 1366
for mitigation activities for properties for which repetitive
flood insurance claim payments have been made, such sums to
remain available until expended.''.
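Read together, paragraphs (1) and (4) above imply a simple annual split of mitigation funding; the following sketch assumes crediting at the statutory ceilings, which the text permits but does not require:

```python
# Sketch of the annual funding split in this section, at the statutory caps.
def mitigation_fund_year(nfif_credit=70_000_000, appropriation=50_000_000):
    assert nfif_credit <= 70_000_000 and appropriation <= 50_000_000
    general_use = min(nfif_credit, 20_000_000)              # unrestricted sec. 1366 amounts
    repetitive_only = max(0, nfif_credit - 20_000_000) + appropriation
    return {"general mitigation": general_use,
            "repetitive-claim properties only": repetitive_only}

for fiscal_year in (2000, 2001, 2002, 2003):
    print(fiscal_year, mitigation_fund_year())
    # {'general mitigation': 20000000, 'repetitive-claim properties only': 100000000}
```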
SEC. 4. CONSOLIDATION OF AUTHORIZATIONS.
(a) In General.--The National Flood Insurance Act of 1968 is
amended as follows:
(1) Borrowing authority.--In the first sentence of section
1309(a) (42 U.S.C. 4016(a)), by striking ``through September''
and all that follows through ``, and'' and inserting the
following: ``through the date specified in section 1319, and''.
(2) Authority for contracts.--In section 1319 (42 U.S.C.
4026), by striking ``after'' and all that follows and inserting
``after September 30, 2004.''.
(3) Emergency implementation.--In section 1336(a) (42
U.S.C. 4056(a)), by striking ``during the period'' and all that
follows through ``in accordance'' and inserting ``during the
period ending on the date specified in section 1319, in
accordance''.
(4) Authorization of appropriations for studies.--In
section 1376(c) (42 U.S.C. 4127(c)), by striking ``through''
and all that follows and inserting the following: ``through the
date specified in section 1319.''.
SEC. 5. CHARGEABLE PREMIUM RATES.
(a) Actuarial Rate Properties.--Section 1308 of the National Flood
Insurance Act of 1968 (42 U.S.C. 4015) is amended by striking
subsection (c) and inserting the following new subsection:
``(c) Actuarial Rate Properties.--Subject only to the limitation
provided under paragraph (1), the chargeable rate shall not be less
than the applicable estimated risk premium rate for such area (or
subdivision thereof) under section 1307(a)(1) with respect to the
following properties:
``(1) Post-firm properties.--Any property the construction
or substantial improvement of which the Director determines has
been started after December 31, 1974, or started after the
effective date of the initial rate map published by the
Director under paragraph (2) of section 1360 for the area in
which such property is located, whichever is later, except that
the chargeable rate for properties under this paragraph shall
be subject to the limitation under subsection (e).
``(2) Repetitive claim payments properties.--Any property
for which the Director determines that repetitive flood
insurance claim payments have been made and the owner of which
has refused a buyout, elevation, or other flood mitigation
measure funded in whole or in part by the Federal Emergency
Management Agency.
``(3) Certain leased coastal and river properties.--Any
property leased from the Federal Government (including
residential and nonresidential properties) that the Director
determines is located on the river-facing side of any dike,
levee, or other riverine flood control structure, or seaward of
any seawall or other coastal flood control structure.''.
(b) Applicability of Annual Limitation on Premium Increases.--
Section 1308(e) of the National Flood Insurance Act of 1968 (42 U.S.C.
4015(e)) is amended by striking ``Notwithstanding'' and inserting
``Except with respect to properties described under paragraph (2) or
(3) of subsection (c) and notwithstanding''.
SEC. 6. REMOVING REPETITIVE CLAIM PROPERTIES FROM FEDERAL DISASTER
ASSISTANCE RESPONSIBILITY.
(a) In General.--Section 582 of the National Flood Insurance Reform
Act of 1994 (42 U.S.C. 5154a) is amended--
(1) by redesignating subsections (d) and (e) as subsections
(e) and (f), respectively; and
(2) by inserting after subsection (c) the following new
subsection:
``(d) Unmitigated Repetitive Claim Properties.--Notwithstanding any
other provision of law, no Federal disaster relief assistance made
available in a flood disaster area may be used to make a payment
(including any loan assistance payment) for repair, replacement, or
restoration for damage to any property in the area for which--
``(1) repetitive flood insurance claim payments have been
made; and
``(2) in accordance with such requirements as the Director
may establish, mitigation assistance under section 1366 of this
Act or section 404 of the Robert T. Stafford Disaster Relief
and Emergency Assistance Act (42 U.S.C. 5170c) has been offered
to the owner of the property, before or after the occurrence of
the flood loss events, which was refused by the owner.''.
(b) Effective Date.--Notwithstanding subsection (f) of section 582
of the National Flood Insurance Reform Act of 1994 (as so redesignated
by subsection (a)(1) of this section), the amendment made by
subsection (a) shall apply to disasters declared after the date of the
enactment of this Act.
SEC. 7. MITIGATION GRANTS FOR REPETITIVE CLAIM PROPERTIES.
(a) In General.--Chapter I of the National Flood Insurance Act of
1968 is amended by adding after section 1322 (42 U.S.C. 4029) the
following new section:
``grants for repetitive claim properties
``Sec. 1323. The Director may provide funding for mitigation
actions that reduce flood damages to repetitive flood insurance claim
payments properties, if the Director determines that--
``(1) such activities are in the best interest of the
National Flood Insurance Fund; and
``(2) such activities can not be funded under the program
under section 1366 because--
``(A) the State or community in which the property
is located can not comply with the requirements of
section 1366(g); or
``(B) the State or community does not have the
capacity to manage such activities.''.
(b) Availability of National Flood Insurance Fund Amounts.--Section
1310(a) of the National Flood Insurance Act of 1968 (42 U.S.C. 4017(a))
is amended--
(1) in paragraph (7), by striking ``and'' at the end;
(2) in paragraph (8), by striking the period at the end and
inserting ``; and''; and
(3) by adding at the end the following new paragraph:
``(9) for funding for mitigation actions under section
1323.''.
SEC. 8. USE RESTRICTIONS ON ACQUIRED PROPERTY.
Section 1366(e)(5)(C) of the National Flood Insurance Act of 1968
(42 U.S.C. 4104c(e)(5)(C)) is amended by striking ``for public use, as
the Director determines is consistent with sound land management and
use in such area'' and inserting the following: ``except that the
Director may not provide amounts under this section for use for
acquisition of properties unless the State or community agrees, to the
satisfaction of the Director, that the instrument for acquisition of
the property will convey to the United States a future interest in all
right, title, and interest in and to all property acquired with the
amounts under this section that is contingent upon the condition that
the property acquired ceases to be dedicated and maintained for use
that is compatible with open space, recreational, or wetlands
management practices.''.
SEC. 9. DEFINITION OF REPETITIVE CLAIM PROPERTIES.
Section 1370(a) of the National Flood Insurance Act of 1968 (42
U.S.C. 4121(a)) is amended--
(1) in paragraph (7), by inserting after the paragraph
designation the following: ``for purposes of sections
1304(b)(1), 1315(a)(2)(A)(i), and 1366(e)(4),'';
(2) in paragraph (13), by striking ``and'' at the end;
(3) in paragraph (14), by striking the period at the end
and inserting ``; and''; and
(4) by adding at the end the following new paragraph:
``(15) the term `repetitive flood insurance claim payments'
means, with respect to a property, that claim payments for
losses to the property have been made under flood insurance
coverage under this title on more than one occasion, without
regard to the amount or timing of the payment or the ownership
of the property.''. | Increases amounts credited to the National Flood Mitigation Fund from the National Flood Insurance Fund, such amounts to be used only for repetitive claim properties.
Extends through FY 2004 the authority to enter into flood insurance contracts and the authorization of appropriations for the national flood insurance program.
Provides chargeable national flood insurance premium rates for: (1) repetitive claim properties; and (2) certain coastal and river properties leased from the Government. Authorizes annual premium increases with respect to such properties.
Amends the National Flood Insurance Reform Act of 1994 to prohibit Federal disaster relief assistance from being used for repair, replacement, or restoration of any property in the area for which: (1) repetitive claim payments have been made; and (2) Federal mitigation assistance has been offered to, but refused by, the property owner.
Authorizes the Director to provide for funding for mitigation actions that reduce flood damages to repetitive claim properties, under certain conditions. Provides funding for such assistance from the National Flood Insurance Fund. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Western New York Redevelopment Act
of 2005''.
SEC. 2. RENEW REPLACEMENT POWER.
Section 1 of the Act of August 21, 1957 (Public Law 85-159; 71
Stat. 401; 16 U.S.C. 836) is amended in subsection (b)(3) by striking
out ``for a period ending not later than the final maturity date of the
bonds initially issued to finance the project work herein specifically
authorized,''.
SEC. 3. ECONOMIC RECOVERY.
(a) License Conditions.--The Federal Energy Regulatory Commission
(FERC) shall include among the conditions imposed on any license issued
subsequent to the original license for the project authorized by
section 1 of the Act of August 21, 1957 (Public Law 85-859; 71 Stat.
401; 16 U.S.C. 836) in addition to those deemed necessary and required
under the terms of the Federal Power Act, the following:
(1) Annual payment.--(A) In order to render financial
assistance to the host governments in which any feature of the
Niagara Power Project is located, the New York Power Authority
(NYPA) shall make a mandatory annual payment from its gross
proceeds to the Erie Canal Harbor Development Corporation in
the City of Buffalo and the County of Erie in the amount of
$10,000,000 for each 12-month period of the new license. For
every 12-month period after the first such period after the
license is issued and continuing for the life of the new
license and any subsequent licenses, the annual payment shall
include an additional 3 percent of the amount of the payment
made during the preceding 12-month period.
(B) Prior to the establishment of the Erie Canal Harbor
Development Corporation, the payment described in subparagraph
(A) shall be held in escrow by the NYPA for transfer to the
corporation upon its establishment. Such payment shall be used
by the Erie Canal Harbor Development Corporation only for the
development, design, engineering and construction of projects
at the Inner and Outer Harbor in Buffalo, and Erie County, New
York, including transportation infrastructure improvements and
Skyway Bridge alternatives. Other qualified uses may include
brownfield remediation, greenway trail design and construction
and other waterfront environmental restoration projects.
(C) At the expiration of the Erie Canal Harbor Development
Corporation the annual payments shall be made to the Erie
County Industrial Development Agency for the uses and purposes
set forth in subparagraph (B).
(2) Additional annual payment to counties.--(A) In order to
achieve the yet unrealized regional economic benefits that the
New York Power Authority contracted to deliver on when it was
awarded exclusive control of the Niagara Power Project, the
Federal Energy Regulatory Commission shall include as a
condition on any new and subsequent license, the payment of 1
percent of gross proceeds to be split evenly by the Industrial
Development Agencies for each of the counties of Niagara, Erie,
Chautauqua, and Cattaraugus, New York.
(B) Such funds shall be distributed by such agencies to
high-load industries and businesses committed to incremental
capital investment and job retention and creation in each such
county. The proceeds shall be disbursed to such western New
York industries and businesses and used by such industries and
businesses to offset the high cost of energy in New York State
and retain current employment levels.
(C) The payment of funds under this paragraph to Erie,
Chautauqua, Cattaraugus, and Niagara counties shall be
additional to, and shall not affect the obligation of the New
York Power Authority to pay, any other funds to those counties
under the terms of any judicial decree or settlement of an
action brought by one or more of such counties against the
NYPA.
(D) The term ``gross proceeds'', as used in this paragraph,
means the total gross proceeds derived by the licensee from the
sale of power for the preceding fiscal year, excluding power
used by the Corporation or sold or delivered to any other
department or agency of the State of New York for any purpose
other than resale.
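The two payment streams created by this section follow simple compounding and pro-rata rules; the sketch below assumes hypothetical license years and gross proceeds, with only the $10,000,000 base, the 3 percent escalator, and the 1 percent even split taken from the text:

```python
# Illustrative schedule for the two payment streams in section 3.
def harbor_payment(year):
    """Sec. 3(1)(A): $10M in year 1, then each year's payment grows by 3%
    of the preceding 12-month period's payment."""
    return 10_000_000 * 1.03 ** (year - 1)

def county_shares(gross_proceeds,
                  counties=("Niagara", "Erie", "Chautauqua", "Cattaraugus")):
    """Sec. 3(2)(A): 1% of gross proceeds split evenly among the four IDAs."""
    share = 0.01 * gross_proceeds / len(counties)
    return {c: share for c in counties}

for year in (1, 2, 10):
    print(f"year {year:2d}: harbor payment = ${harbor_payment(year):,.0f}")
print(county_shares(1_000_000_000))   # $2.5M per county on $1B of gross proceeds
```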
SEC. 4. TRANSPARENCY.
The Secretary of Energy, acting through the Office of Inspector
General, Office of Audit Services, shall conduct an audit of Niagara
Power Project finances and operations since project inception in order
to provide consistent and timely information concerning the true
economic impact of the Niagara Power Project and its revenue and
disbursements and shall conduct subsequent annual audits to verify
payments to host communities and others.
SEC. 5. PHYSICAL SECURITY AND SAFETY.
(a) In General.--In order to improve the physical security of the
Niagara Power Project, the Federal Energy Regulatory Commission shall
include among the conditions imposed on any license issued subsequent
to the original license for the project authorized by section 1 of the
Act of August 21, 1957 (Public Law 85-159; 71 Stat. 401; 16 U.S.C. 836)
in addition to those deemed necessary and required under the terms of
the Federal Power Act, a requirement that the licensee shall acquire by
contract or other agreement property or interests therein sufficient to
provide an appropriate effective zone of separation between all project
control, switching or generating facilities and any privately owned
real property not used for the generation, transmission, or control of
electric energy. Any such acquisition by the licensee shall be carried
out pursuant to such terms as may be necessary to ensure replacement of
any residential, educational, recreational, and community services and
facilities acquired or adversely affected by such acquisition, and that
such replacement facilities or services are of equivalent character,
value, and number to those so acquired, while meeting contemporary
standards for construction, operation, and level of service.
(b) Resources for First Responders.--The New York Power Authority
shall provide to First Responders serving the local jurisdictions in
which the Niagara Power Project facilities are located adequate
financial and other resources and assistance to acquire, operate,
maintain, and replace, through the term of any license granted pursuant
to this Act, the equipment and other assets needed to protect human
life and property from harm should any feature or facility of the
Niagara Power Project be subject to damage of any type because of an
act of terror or other criminal behavior.
SEC. 6. HOLD HARMLESS.
Nothing in this Act authorizes any increase in the rates and
charges for electric energy under the Replacement Power program.
SEC. 7. SEVERABILITY.
If any provision of this Act, or amendment made by this Act, or the
application of this Act or such amendments to any person or
circumstance is determined by a court to be invalid, the validity of
the remainder of this Act and the amendments made by this Act and the
application of such provision to other persons and circumstances shall
not be affected by such determination. | Western New York Redevelopment Act of 2005 - Modifies the statutory licensing conditions governing operation of a power project by the New York Power Authority regarding utilization of the federal share of the water of the Niagara River whose use is permitted by international agreement.
Directs the Federal Energy Regulatory Commission to include among additional conditions imposed on any license issued subsequent to the original license: (1) specified mandatory annual payments from the gross proceeds of the Authority to the Erie Canal Harbor Development Corporation; (2) the payment of one percent of gross proceeds, to be split evenly by the Industrial Development Agencies for the counties of Niagara, Erie, Chautauqua, and Cattaraugus, New York; and (3) a requirement that the licensee acquire property or interests sufficient to provide an effective zone of separation between all project control, switching or generating facilities, and any privately owned real property not used for the generation, transmission, or control of electric energy.
Requires the Secretary of Energy to conduct: (1) an audit of Niagara Power Project finances and operations since project inception; and (2) subsequent annual audits to verify payments to host communities.
Requires the New York Power Authority to provide to First Responders serving the local jurisdictions in which the Niagara Power Project facilities are located adequate resources and assistance to acquire, operate, maintain, and replace the assets needed to protect human life and property from harm should any feature or facility of the Niagara Power Project be subject to damage of any type because of an act of terror or other criminal behavior. |
The sandwich contained at least 100g of cocaine
A man caught with a ham and cheese sandwich stuffed with drugs has been arrested in the holiday resort of Benidorm.
Spanish police detained the Colombian national after discovering the snack, which contained more than 100g (3.5 ounces) of cocaine.
A photo of the sandwich-stash was tweeted by @policia.
It shows a piece of bread containing slices of ham and melted cheese, as well as nine cylindrical packages wrapped in plastic.
The tweet revealed the man was arrested at Benidorm bus station and was suspected of trafficking.
The tweet posted by @policia
Officers reportedly searched the 29-year-old's home and discovered "more than a kilo of cocaine powder and pieces, as well as marijuana and tools to handle drugs".
Spanish police confirmed another suspected trafficker, reportedly a 20-year-old Colombian, had been arrested.
Reports suggest the two are housemates in the coastal Mediterranean town.
Spain's ties to Latin America have made it a common entry point for cocaine into Europe. |||||
| – In other circumstances, lunch might consist of a sandwich and a Coke, but Spanish police say they have arrested a Colombian national living in a resort town for his variant on the theme—the cocaine sandwich. As Sky News reports, the concoction "contained ham, melted cheese, and nine cylinders of cocaine weighing more than 100g." A search of his house yielded "more than a kilo of cocaine powder and pieces, as well as marijuana and tools to handle drugs." A housemate, also a Colombian national, was also detained. Meanwhile, the Local reports that Latin American cocaine stuffed in innocuous food items seems to be a recent trend: Police in the Spanish port city of Algeciras last week confiscated a shipment of Costa Rican pineapples crammed full of 5,500 pounds of cocaine.
osteoporosis is characterized by the depletion of bone mineral mass , combined with bone micro - architecture deterioration , greater bone fragility and a resultant increased fracture risk ; a 10% loss of vertebral bone mass can double the risk of a vertebral fracture .
osteoporosis is one of the most prevalent skeletal disorders and has a similar lifetime risk to coronary heart disease .
it affects approximately 3 million people in the uk and worldwide , an osteoporotic fracture occurs every 3 seconds .
osteoporosis has enormous public health consequences due to the morbidity and mortality of the resulting fractures and the associated healthcare expenditure , particularly as aging populations increase in many parts of the world . as there is no cure
, it is important to identify early life influences on later bone mineral density , which may aid the development of interventions to optimize bone health and reduce osteoporosis risk .
this article discusses the developmental origins of osteoporosis and outlines some of the modifiable and non - modifiable risk factors in both intrauterine and postnatal life that contribute to the later onset of osteoporosis .
bone mineral content ( bmc ) and bone mineral density ( bmd ) in adulthood depends predominantly on growth and mineralization of the skeleton and the resultant peak bone mass achieved and then , to a lesser extent , on the subsequent loss .
reduced peak bone mass in childhood is associated with increased fracture risk and has been proposed as one of the most accurate predictors of later life fracture risk .
genetic predisposition accounts for up to 50% of the variance in bone mass and gender also influences bone composition with males attaining greater bone mass than females .
environmental influences during both childhood and adulthood , such as smoking , corticosteroid use and exercise are also important .
it is likely that much of this remaining variation results from the programming of systems controlling skeletal growth trajectory and so ultimately influencing peak bone mass during critical growth periods [ 11 , 12 ] . the developmental origins of health and disease ( dohad ) hypothesis suggests that nutritional imbalance during critical windows in early life can permanently influence or programme long - term development and disease in later life ( see table 1 ) .
much of the original work was by barker and colleagues who reported the relationship with low birth weight ( used as a proxy for fetal growth ) with coronary heart disease [ 14 , 15 ] .
further studies suggested , however , that these mechanisms and effects were not restricted to fetal life and that nutrition and growth in infancy ( and perhaps in later childhood ) were also crucial , leading to the incorporation of elements of evolutionary biology and the adoption of the term dohad .
factors that affect early life bone development have not been fully elucidated and the lack of prospective research in this area has been highlighted .
some of the greatest insights into programming of bone come from large epidemiological studies , either those with detailed early exposure information and prolonged follow - up , or mother - offspring cohorts .
examples of large longitudinal studies include the avon longitudinal study of parents and children ( alspac ) [ 18 , 19 ] , which consists of a cohort of approximately 14000 women anticipated to give birth in 1991 or 1992 .
utilising frequent questionnaires and clinic assessments of the mothers and their offspring , detailed information is available on early - life exposures and subsequent skeletal development .
other studies include the hertfordshire cohort study , consisting of 3000 men and women aged 60 - 75 years living in hertfordshire , who were recruited to a study to determine influences of birth weight and infant growth on adult disease ; and the finnish cohort of over 7000 people born in helsinki university hospital between 1924 and 1933 and still residing in finland in 1971 .
the finnish cohort is unique in that early life data is linked to later hip fracture rather than relying on proxy markers of fracture risk such as bone mineral density , as assessed by non - invasive dxa methods .
the southampton women's survey is a mother - offspring cohort and is the only study in europe where the mothers were interviewed before conception ; up to 2000 offspring have now been followed up at 10 years of age .
the potential for confounding in observational studies can make establishing causation difficult . for example , poor nutrition is an inevitable consequence for the sickest neonates , who in turn will be more likely to have a poorer metabolic outcome . similarly , socioeconomic status may be an important confounder when investigating the effects of programming , as socioeconomic status in itself is known to have an important effect on bmd ; it is therefore vital that such potential confounders are adjusted for .
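a minimal sketch of such confounder adjustment ( hypothetical variable names and simulated data , not from the original studies ) :

```python
# compare a crude and an ses-adjusted regression of bmd on birth weight;
# data are simulated solely to illustrate the adjustment step.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
ses = rng.normal(size=n)                                      # standardised ses
birth_weight = 3.4 + 0.2 * ses + rng.normal(0, 0.4, size=n)   # kg
bmd = 1.0 + 0.05 * birth_weight + 0.03 * ses + rng.normal(0, 0.05, size=n)

df = pd.DataFrame({"bmd": bmd, "birth_weight": birth_weight, "ses": ses})
crude = smf.ols("bmd ~ birth_weight", data=df).fit()
adjusted = smf.ols("bmd ~ birth_weight + ses", data=df).fit()

# the adjusted coefficient estimates the birth-weight association
# that is not explained by the shared dependence on ses.
print(crude.params["birth_weight"], adjusted.params["birth_weight"])
```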
another challenge in longitudinal cohort studies , especially involving children , is that of attritional losses over time introducing a risk of bias . a 2011 meta - analysis stated that research from a variety of populations may help clarify inconsistencies concerning the relationship between early life events and subsequent bone health .
further evidence for the role of programming comes from twin studies , where statistically significant differences in the relationship between birth weight and bone mineral content were found between monozygous twin pairs . in this study , associations were largely accounted for by environmental factors independent of maternal factors ( gestation , smoking , nutrition etc . ) and were largely mediated by skeletal size and especially adult height . it can be technically challenging to account for bone size at different stages of growth when interpreting dxa bone mineral density data , particularly in longitudinal studies .
bone strength is based on size as well as mineral content and as a child grows , their bones will change in shape and size , and the body will also change in size and composition .
some scanners automatically produce t scores based on the bmd with reference to a healthy adult - clearly this is completely meaningless in a young child .
dxa measures the total amount of bone mineral content contained within the skeletal region scanned and the two - dimensional projected bone area in order to calculate an areal bmd in grams per square centimetre , rather than calculating the mineral content within the bone volume . therefore , if the bone is larger , the areal bmd will appear greater although the true volumetric bmd would be the same .
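a minimal numerical sketch of this size artefact , modelling bones as simple cylinders with illustrative values ( not data from the original article ) :

```python
# illustrative only: two "bones" with identical true volumetric bmd (g/cm^3)
# but different size; dxa-style areal bmd (bmc / projected area, g/cm^2)
# comes out higher for the larger bone.
import math

def areal_bmd(bmc_g: float, projected_area_cm2: float) -> float:
    """areal bmd = bone mineral content / two-dimensional projected area."""
    return bmc_g / projected_area_cm2

V_BMD = 0.30  # g/cm^3, the same true density for both bones

for label, radius_cm, length_cm in [("smaller bone", 0.8, 10.0),
                                    ("larger bone", 1.2, 15.0)]:
    volume = math.pi * radius_cm ** 2 * length_cm   # cylinder volume, cm^3
    bmc = V_BMD * volume                            # mineral content, g
    area = 2 * radius_cm * length_cm                # side-on projection, cm^2
    print(f"{label}: areal bmd = {areal_bmd(bmc, area):.3f} g/cm^2")

# the larger bone looks "denser" on dxa despite identical true density,
# which is why a small (or small, chronically ill) child can appear to
# have spuriously low bmd.
```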
this is particularly important in children with chronic diseases , who are often small , as their bmd may consequently be underestimated by dxa .
this problem is further compounded by pubertal delay , which is often seen in chronic disease .
therefore it is vital to consider height , bone age ( rather than chronological age ) and puberty when interpreting results and the conclusions of studies ; several different methods are available to help adjust for bone size and growth .
bone mineral accrual is greatest during the last trimester of pregnancy , when growth velocity is rapid .
birth weight and birth length are a reflection of intra - uterine growth and therefore are also affected by environmental influences during pregnancy . a study by harvey et al . using the southampton cohort showed that intrauterine growth was strongly associated with childhood bone size and density at 4 years of age .
change in femur length between 19 and 34 weeks gestation was associated with childhood skeletal size at 4 years of age , while changes in fetal abdominal circumference predicted bone density .
there are however , conflicting data regarding the influence of birth weight on later bmd .
baird et al . performed a systematic review and meta - analysis to determine whether birth weight predicted bone mass in adulthood .
most of the included studies showed a consistent association between higher birth weight and greater adult bone mineral content at both the lumbar spine and the hip , but birth weight was not a strong predictor of later lumbar spine or hip bmd . the hertfordshire cohort study ( which formed the basis for several of the studies of barker et al . ) showed that birth weight was independently associated with bmd in men , but not women , at 63 years of age .
baird suggests that in most of the studies , the weak association between birth weight and bmd was likely to be a result of other postnatal factors such as childhood physical activity and pubertal timing as well as genetic variation playing a more influential role .
studies in the alspac cohort demonstrated independent effects of birth weight and weight at one year on bone size and strength during the sixth and seventh decades after adjustment for confounding lifestyle factors .
this provides strong evidence for both programming and tracking of bone mass throughout the lifecourse .
although another study found no association with preterm birth itself and peak bone mass , an effect of being small for gestational age was apparent , suggesting that a proportion of later bone mass is determined by fetal growth .
some studies suggest that very low birth weight infants , whether preterm or not , attain a sub - optimal peak bone mass , in part due to their small size and subnormal skeletal mineralization .
endocrine mechanisms may be responsible for the programming , for example via the growth hormone ( gh ) - insulin - growth factor 1 ( igf-1 ) axis , which regulates both growth and bone remodeling . in support of this , javaid et al . showed that the concentration of igf-1 in the umbilical cord correlates strongly with birth weight ( after adjustment for gestational age ) and bone mineral content .
similarly , there is evidence that birth weight and infant weight predict gh and cortisol levels during adulthood , which in turn are determinants of later bone loss [ 31 , 32 ] .
for example , in a study by cooper et al . , those who were lightest at 1 year of age had the lowest bmc . in a further study , weight gain during the first two years of life predicted bmd at age 9 - 14 years in children who were born preterm .
the finnish cohort showed that low rates of childhood growth were a major determinant of later hip fracture risk .
studies as part of the hertfordshire cohort study found that birth weight and weight at 1 year of age were strongly associated with measures of bone strength at both the radius and tibia , but that low weight in infancy was also associated with reduced femoral neck width , independently of bone mineral content ( bmc ) .
this supports the theory that intrauterine and postnatal growth influences later fracture risk not only by affecting bone mass , but also by effects on bone geometry .
eighty percent of fetal bone mineral accumulation occurs during the last trimester of pregnancy , with a surge in placental transfer of calcium , magnesium and phosphorus to the fetus .
a preterm infant who spends this period without the placenta and the associated endocrine and physical maternally controlled environments is therefore more susceptible to a lower bmd and bmc than an infant born at term .
there are , however , conflicting data regarding the long - term consequences of preterm birth on the skeleton and the potential for peak bmd compared to term counterparts .
premature infants are known to have a lower bone mass , bmd and bmc at the corrected age of term , as well as a lower weight and ponderal index .
a study of 7-year - old boys showed greater measures of cortical thickness , whole body bmc and hip bmd in term compared to preterm boys after adjustment for weight , height and age .
these differences remained after adjustment for birth weight , length of neonatal hospital stay and current activity level .
a study by fewtrell et al . showed that former preterm infants who were followed up at around 10 years of age were shorter , lighter and had lower bmc than controls .
these differences continue through childhood and appear to persist until puberty [ 38 , 39 ] , although results are difficult to interpret due to the confounding effects of the endocrine changes during puberty , and the interaction with bone size and later bmd . in a study by backstrom et al . , lower bone strength was demonstrated in those born preterm at the distal tibia and radius compared to age and sex matched controls .
several studies have failed to demonstrate an association between preterm birth and later bone strength , although all of these [ 28 , 39 , 41 ] were undertaken in relatively small cohorts . a possible explanation for the variation in study results may be in the timing of follow - up , as catch - up in bone mineralization may occur primarily in late childhood and adolescence .
other studies have found that although preterm born individuals were smaller , their bmd was appropriate for size .
as some studies may not have made appropriate adjustments for current size , it is difficult to determine whether bmd was in fact appropriate for body size in these cohorts .
several other studies in infants have shown the influence of early growth on later bone health in those born preterm .
fewtrell et al . suggested that preterm infants with the most substantial increase in height ( length ) between birth and 8 - 12 years of age showed the greatest bone mass at follow - up ( see table 2 ) .
they also demonstrated that birth length alone was a strong predictor of later bone mass , suggesting that optimising linear growth in early life may be beneficial to later bone health .
however , although conducted with a large cohort ( n=201 ) , few measurements were taken after discharge and dual - energy x - ray absorptiometry ( dxa ) was only performed at 8 - 12 years . as a result , the impact of changes in growth and corresponding bone mass at potentially critical epochs of infancy could not be assessed .
there are scarce data looking at the effects of birth length and ponderal index on later bone health ; further work in this area would add additional insight into the effects of growth . promoting adequate growth during nicu care remains a challenge , and the initial dramatic fall - off in growth centiles , followed by a period of rapid growth acceleration , represents a pattern that is very different to that seen following normal pregnancies .
whether this type of growth trajectory represents an independent risk for later adverse metabolic outcome requires further study , but this pattern highlights that growth rather than absolute size is the key variable determining longer - term health . optimising early growth through nutritional interventions generates positive and lasting effects on bone mineralization , and it is hypothesized that this may partially counteract preterm bone deficits .
a systematic review by kuschel and harding in 2009 showed that fortifying the nutrition of preterm babies improves growth and bone mineral accretion .
there is conflicting evidence as to whether breastfeeding has a protective role in the primary prevention of osteoporosis .
in some studies , such as that of fewtrell , breast milk consumption was found to result in higher adult bmd despite the milk being unfortified and having a lower mineral content than formula .
this suggests a possible beneficial role for non - nutrient components such as growth factors . in another study , bone mass at a follow - up age of approximately 10 years was positively associated with the duration of breastfeeding , yet other studies have shown no benefits at a similar age [ 45 , 46 ] .
other studies have not demonstrated an ongoing relationship into adulthood between breastfeeding and bone mass . given the known benefits of breastfeeding and the lack of proven negative association , it seems prudent to strongly encourage breastfeeding , despite the slower infant growth trajectories that may be seen compared to preterm infants fed using artificial milk formula .
it is clear that there is a strong genetic predisposition to osteoporosis and although the genes that regulate bone mass have not been completely established , responsible gene variants include vitamin d receptor polymorphisms , collagen-1 receptor and oestrogen receptor variants .
there is also some evidence that gene - environment interactions may play an important role . using vitamin d receptor ( vdr ) polymorphisms as an example , dennison et al . showed that the relationship between lumbar spine bmd and vdr genotype varied according to birth weight and remained after adjusting for adult body weight , and that a significant statistical interaction occurred between birth weight and vdr genotype .
many of the long - term effects on bone health may be modulated by epigenetic mechanisms - mitotically heritable alterations in gene expression that are not caused by changes in dna sequence .
the classic examples are dna methylation and histone acetylation [ 51 , 52 ] ; these result in differences in gene expression and transcription , but may also involve post - transcriptional effects on other processes such as protein translation .
early life growth and nutritional exposures appear to affect the cellular memory and result in variation in later life phenotypes . much of this work is preliminary but initial data suggest that epigenetic mechanisms may underlie the process of developmental plasticity and its effect on the risk of osteoporosis .
one of the models that has been postulated is the role of maternal vitamin d status and placental calcium transfer . early work on methylation of vitamin d receptors and placental calcium transporters suggests that epigenetic regulation might explain how maternal vitamin d levels affect bone mineralization in the neonate . much of the current research is in animal models , but if the changes can be replicated in humans , epigenetic or other biomarkers may provide risk assessment tools to enable targeted intervention for those at greatest risk of osteoporosis ( see fig . 1 ) . mother - offspring cohorts enable the observation of environmental influences and characteristics of pregnant women in relation to the bone mass of their offspring .
adverse environmental conditions such as smoking during pregnancy , poor diet [ 54 , 55 ] , low fat stores [ 56 , 57 ] and low maternal vitamin d levels [ 5 , 33 ] are all associated with suboptimal bone mineral density in later life .
studies using the alspac cohort show that bone development of the child is clearly related to the in utero environment , but that some of the associations can be explained in part by the shared associations with bone and body size , although maternal vitamin d status exerts persisting effects on bone mass development .
findings from the southampton women's survey also corroborate the role of low maternal vitamin d levels on offspring bone mass and suggest that the mechanism is related to umbilical venous calcium concentrations [ 56 , 58 ] .
further work from that group also suggests that maternal vitamin d stores can influence fetal femoral development as early as 19 weeks gestation .
maternal protein restriction in rats results in a reduction in bone area and bone mineral content , probably by programming the skeletal growth trajectory through modification of the responsiveness of epiphyseal cells [ 60 , 61 ] .
supplemental calcium in adolescent rats does not rescue the reduction in bmc associated with placental restriction in utero , suggesting that the early life environment is critical for bone programming ; dietary phosphate restriction in neonatal pigs likewise results in reduced growth and bone mineral content .
perinatal dietary deficiency of essential fatty acids , accompanied by reduced leptin and igf-1 levels , also influenced bone density in adult rats .
the critical role of leptin in regulating bone metabolism is now also acknowledged in humans and there are clinical trials currently investigating the role of leptin treatment on bone mineral density , both in hypoleptinaemic and post - menopausal women .
this evidence highlights the importance of osteoporosis prevention at all stages of the life course , including optimising the in utero environment and maternal nutrition , and the importance of infant nutrition as a preventative strategy for future osteoporosis .
it is important to continue to determine the mechanisms behind skeletal programming to further aid the development of preventative strategies .
part of this article has been previously published in international journal of endocrinology , volume 2013 ( 2013 ) , article id 902513 , 7 pages | osteoporosis is one of the most prevalent skeletal disorders and has enormous public health consequences due to the morbidity and mortality of the resulting fractures .
this article discusses the developmental origins of osteoporosis and outlines some of the modifiable and non - modifiable risk factors in both intrauterine and postnatal life that contribute to the later onset of osteoporosis .
evidence for the effects of birth size and early growth in both preterm and term born infants are discussed and the role of epigenetics within the programming hypothesis is highlighted .
this review provides compelling evidence for the developmental origins of osteoporosis and highlights the importance of osteoporosis prevention at all stages of the life course . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Social Security Commission Act of
2014''.
SEC. 2. ESTABLISHMENT.
There is established in the legislative branch a commission to be
known as the ``Commission on Long Term Social Security Solvency'' (in
this Act referred to as the ``Commission'').
SEC. 3. DUTY OF THE COMMISSION.
Not later than 1 year after the initial meeting of the Commission,
the Commission shall transmit to Congress a special message that
includes recommendations and proposed legislation for achieving
solvency in each of the Federal Old-Age and Survivors Insurance Trust
Fund and the Federal Disability Insurance Trust Fund for a period of at
least 75 years beginning on the date that is 1 year after the initial
meeting of the Commission. Such message shall be approved by at least 9
members of the Commission.
SEC. 4. MEMBERS.
(a) Number and Appointment.--The Commission shall be composed of 13
members. Of the members of the Commission--
(1) 1 shall be appointed by the President;
(2) 3 shall be appointed by the Speaker of the House of
Representatives;
(3) 3 shall be appointed by the Minority Leader of the
House of Representatives;
(4) 3 shall be appointed by the Majority Leader of the
Senate; and
(5) 3 shall be appointed by the Minority Leader of the
Senate.
(b) Qualifications for Congressional Appointees.--Of the members of
the Commission appointed by the Congress, at least 1 appointed by each
political party shall be an expert who is not an elected official or an
officer or employee of the Federal Government or of any State.
(c) Timing of Appointments.--Each of the appointments made under
subsection (a) shall be made not later than 45 days after the date of
the enactment of this Act.
(d) Terms; Vacancies.--Each member shall be appointed for the life
of the Commission, and a vacancy in the Commission shall be filled in
the manner in which the original appointment was made.
(e) Compensation.--
(1) In general.--Members of the Commission shall serve
without pay.
(2) Travel expenses.--Each member shall receive travel
expenses, including per diem in lieu of subsistence, in
accordance with applicable provisions under subchapter I of
chapter 57 of title 5, United States Code.
SEC. 5. OPERATION AND POWERS OF THE COMMISSION.
(a) Chair and Co-Chair.--The member of the Commission appointed by
the President under section 4(a) shall serve as the chair of the
Commission. A co-chair of the Commission shall be designated by the
Speaker of the House of Representatives at the time of the appointment.
(b) Meetings.--The Commission shall meet not later than 30 days
after the members of the Commission have been appointed, and at such
times thereafter as the chair or co-chair shall determine.
(c) Rules of Procedure.--The chair and co-chair shall, with the
approval of a majority of the members of the Commission, establish
written rules of procedure for the Commission, which shall include a
quorum requirement to conduct the business of the Commission.
(d) Hearings.--The Commission may, for the purpose of carrying out
this Act, hold hearings, sit and act at times and places, take
testimony, and receive evidence as the Commission considers
appropriate.
(e) Obtaining Official Data.--The Commission may secure directly
from any department or agency of the United States, including the
Congressional Budget Office and the Government Accountability Office,
any information or technical assistance necessary to enable it to carry
out this Act. Upon request of the chair or co-chair of the Commission,
the head of that department or agency shall furnish that information or
technical assistance to the Commission.
(f) Contract Authority.--The Commission may contract with and
compensate government and private agencies or persons for any purpose
necessary to enable it to carry out this Act.
(g) Mails.--The Commission may use the United States mails in the
same manner and under the same conditions as other departments and
agencies of the United States.
SEC. 6. PERSONNEL.
(a) Director.--The Commission shall have a Director who shall be
appointed by the Commission. The Director shall be paid at a rate of
pay equivalent to the annual rate of basic pay for a comparable
position paid under the Executive Schedule, subject to the approval of
the chair and the co-chair.
(b) Staff.--The Director may appoint and fix the pay of additional
staff as the Director considers appropriate.
(c) Experts and Consultants.--The Commission may procure temporary
and intermittent services under section 3109(b) of title 5, United
States Code, but at rates for individuals not to exceed the daily
equivalent of the annual rate of basic pay for a comparable position
paid under the Executive Schedule.
(d) Staff of Federal Agencies.--Upon request of the Commission, the
head of any Federal department or agency may detail, without
reimbursement, any of the personnel of that department or agency to the
Commission to assist it in carrying out its duties under this Act.
(e) Administrative Support Services.--Upon the request of the
Commission, the Administrator of General Services shall provide to the
Commission, on a reimbursable basis, the administrative support
services necessary for the Commission to carry out its responsibilities
under this Act.
(f) Gifts, Bequests, and Devises.--The Commission may accept, use,
and dispose of gifts, bequests, or devises of services or property,
both real and personal, for the purpose of aiding or facilitating the
work of the Commission. Gifts, bequests, or devises of money and
proceeds from sales of other property received as gifts, bequests, or
devises shall be deposited in the Treasury and shall be available for
disbursement upon order of the Commission.
SEC. 7. TERMINATION.
The Commission shall terminate not later than 60 days after the
submission of the report described in section 3.
SEC. 8. AUTHORIZATION OF APPROPRIATIONS.
There is authorized to be appropriated not more than $2,000,000 to
carry out this Act.
SEC. 9. EXPEDITED CONSIDERATION OF COMMISSION RECOMMENDATIONS.
(a) Expedited Consideration.--
(1) Introduction of approval bill.--The majority leader of
each House or a designee shall (by request) introduce an
approval bill as described in subsection (c) not later than the
third day of session of that House after the date of receipt of
a special message transmitted to the Congress under section 3.
(2) Consideration in the house of representatives.--
(A) Referral and reporting.--Any committee of the
House of Representatives to which an approval bill is
referred shall report it to the House without amendment
not later than the third legislative day after the date
of its introduction. If a committee fails to report the
bill within that period or the House has adopted a
concurrent resolution providing for adjournment sine
die at the end of a Congress, such committee shall be
automatically discharged from further consideration of
the bill and it shall be placed on the appropriate
calendar.
(B) Proceeding to consideration.--Not later than 3
legislative days after the approval bill is reported or
a committee has been discharged from further
consideration thereof, it shall be in order to move to
proceed to consider the approval bill in the House.
Such a motion shall be in order only at a time
designated by the Speaker in the legislative schedule
within two legislative days after the day on which the
proponent announces an intention to the House to offer
the motion provided that such notice may not be given
until the approval bill is reported or a committee has
been discharged from further consideration thereof.
Such a motion shall not be in order after the House has
disposed of a motion to proceed with respect to that
special message. The previous question shall be
considered as ordered on the motion to its adoption
without intervening motion. A motion to reconsider the
vote by which the motion is disposed of shall not be in
order.
(C) Consideration.--If the motion to proceed is
agreed to, the House shall immediately proceed to
consider the approval bill in the House without
intervening motion. The approval bill shall be
considered as read. All points of order against the
approval bill and against its consideration are waived.
The previous question shall be considered as ordered on
the approval bill to its passage without intervening
motion except 4 hours of debate equally divided and
controlled by the proponent and an opponent and one
motion to limit debate on the bill. A motion to
reconsider the vote on passage of the approval bill
shall not be in order.
(3) Consideration in the senate.--
(A) Committee action.--The appropriate committee of
the Senate shall report without amendment the approval
bill not later than the third session day after
introduction. If a committee fails to report the
approval bill within that period or the Senate has
adopted a concurrent resolution providing for
adjournment sine die at the end of a Congress, the
Committee shall be automatically discharged from
further consideration of the approval bill and it shall
be placed on the appropriate calendar.
(B) Motion to proceed.--Not later than 3 session
days after the approval bill is reported in the Senate
or the committee has been discharged thereof, it shall
be in order for any Senator to move to proceed to
consider the approval bill in the Senate. The motion
shall be decided without debate and the motion to
reconsider shall be deemed to have been laid on the
table. Such a motion shall not be in order after the
Senate has disposed of a prior motion to proceed with
respect to the approval bill.
(C) Consideration.--If a motion to proceed to the
consideration of the approval bill is agreed to, the
Senate shall immediately proceed to consideration of
the approval bill without intervening motion, order, or
other business, and the approval bill shall remain the
unfinished business of the Senate until disposed of.
Consideration on the bill in the Senate under this
subsection, and all debatable motions and appeals in
connection therewith, shall not exceed 10 hours equally
divided in the usual form. All points of order against
the approval bill or its consideration are waived.
Consideration in the Senate on any debatable motion or
appeal in connection with the approval bill shall be
limited to not more than 1 hour. A motion to postpone,
or a motion to proceed to the consideration of other
business, or a motion to recommit the approval bill is
not in order. A motion to reconsider the vote by which
the approval bill is agreed to or disagreed to is not
in order.
(4) Amendments prohibited.--No amendment to, or motion to
strike a provision from, an approval bill considered under this
section shall be in order in either the Senate or the House of
Representatives.
(5) Coordination with action by other house.--
(A) In general.--If, before passing the approval
bill, one House receives from the other a bill--
(i) the approval bill of the other House
shall not be referred to a committee; and
(ii) the procedure in the receiving House
shall be the same as if no approval bill had
been received from the other House until the
vote on passage, when the bill received from
the other House shall supplant the approval
bill of the receiving House.
(B) Exception.--This paragraph shall not apply to
the House of Representatives.
(b) Limitation.--Subsection (a) shall apply only to an approval
bill described in subsection (c) and introduced pursuant to subsection
(a)(1).
(c) Approval Bill Described.--For purposes of subsection (a), a
bill described in this paragraph is a bill--
(1) which consists of the proposed legislation which is
included in such report to carry out the recommendations made
by the Commission in the report; and
(2) the title of which is as follows: ``A bill to carry out
the recommendations of the Commission on Long Term Social
Security Solvency.''.
(d) Extended Time Period.--If Congress adjourns at the end of a
Congress and an approval bill was then pending in either House of
Congress or a committee thereof, or an approval bill had not yet been
introduced with respect to a special message, then within the first 3
days of session of the next Congress, the Commission shall transmit to
Congress an additional special message containing all of the
information in the previous, pending special message. An approval bill
may be introduced within the first five days of session of such next
Congress and shall be treated as an approval bill under this section,
and the time periods described in paragraphs (2) and (3) of subsection
(a) shall commence on the day of introduction of that approval bill. | Social Security Commission Act of 2014 - Establishes in the legislative branch the Commission on Long Term Social Security Solvency to make recommendations to Congress, including proposed legislation, for achieving solvency in the Social Security trust funds for a period of at least 75 years. Requires expedited consideration of any proposed legislation approving Commission recommendations. |
Mark Girardeau positioned his drone directly above the newborn calf and its massive mother, his eyes glued to the screen of his remote control as he watched the pair of gray whales while aboard Newport Coastal Adventure’s small inflatable boat.
The baby – thought to be just days old – suddenly veered away from its mother and headed toward whale watchers on Sunday afternoon.
“They’re coming right toward us, guys,” said Girardeau, a photographer who frequents whale-watching charters to capture footage of creatures in the wild. “Wait, the baby just split up, the baby is coming straight toward us! Oh, wow. The mom is coming right toward us too!”
It was a rare moment off Laguna’s coastline captured on video as the baby, born during the southbound migration to Mexico, glided beneath the boat. Even rarer was when the mom had to come fetch the baby and get back on track, the mother’s 45-foot-long body swimming just below the 23-foot inflatable boat.
“They’re right here, they’re right under the boat! No freaking way! No way!” Girardeau could be heard screaming on the video. “They’re right under the boat! Oh my God! No freaking way!”
Girardeau on Monday was still reeling from the experience. He said he didn’t want to take his eyes from the drone screen he clutched in his hands, even when the calf popped its head up next to the bow.
“I really wanted to stop and enjoy it. But if I took my eyes off the iPad, I miss the shot and no one will believe me,” he said.
There were several boats watching the pair during their migration, but it was the Newport Coastal Adventure boat the whales found intriguing.
As the whale duo approached, the boat’s captain, Taylor Thorne, cut the engine both for safety and as part of whale etiquette.
“We were just sitting still in the water. The whales had the ability to do anything they wanted,” Girardeau said. “They made the decision to change course and check out our boat.”
It’s unclear what prompted the baby whale to check out the small boat.
“I don’t know if the baby got confused and mistook our boat for another whale. Maybe since the baby was less than a few days old, the baby was still trying to figure out what’s happening in the world,” Girardeau said.
The mom didn’t seem as curious.
“It seemed like the mom just wanted to put the baby back on track to go toward Mexico,” he said. “I’m sure she’s seen lots of boats before. She was more like, ‘Hey, get back here.’”
Was Thorne nervous when the mother whale – about twice the size of the boat he was helming – swam underneath?
“I felt safe and I was comfortable knowing the whales were very gentle,” he said. “They were just calm, they didn’t feel like any threat or anything.”
Thorne said when the baby popped up its head next to the bow, “it was one of the most unreal experiences I’ve had out there ... you literally could have reached out and touched it.”
Down in warmer-water lagoons in Baja, gray whales come up close to the boats and charter passengers can hold their hands out to touch their heads or even pet their tongues. But it’s not a behavior typically seen in Southern California.
Thorne said moms tend to be protective of their calves, but this mother seemed to be allowing her calf to explore the world.
“It let the baby do what it wanted to do. I think it led to the experience being a lot less scary and a lot more ... exciting,” Thorne said.
He said it all happened so fast, it took a moment to realize the magnitude of what happened.
“It took us until they left the boat to process what was going on,” he said. “It was so exciting and so thrilling, time went so fast. None of us were ready for it.”
Thorne knows an encounter like this is a “once-in-a-lifetime kind of thing.”
|||||
Mothers hold their children more on the left and wild mammals seem to keep their young more on that side too, at least when fleeing predators.
Now it seems many mammal babies prefer to approach their mother from one side too – and the explanation may lie in the contrasting talents of each half of the brain.
In mammals, the brain’s right hemisphere is responsible for processing social cues and building relationships. It is also the half of the brain that receives signals from the left eye.
Some researchers think this explains why human and ape mothers tend to cradle their babies on the left: it is so they can better monitor their facial expressions with their left eye.
Now, Janeane Ingram at the University of Tasmania, Australia, and her colleagues have looked at whether animal infants also prefer to observe their mum from one side.
The team studied 11 wild mammals from around the world: horses, reindeer, antelopes, oxen, sheep, walruses, three species of whale and two species of kangaroo.
Whenever an infant approached its mother from behind, the researchers noted whether it positioned itself on its mum’s left or right side. They recorded almost 11,000 position choices for 175 infant-mother pairs.
Infants of all species were more likely to position themselves so that their mother was on their left. This happened about three-quarters of the time.
The observations tally with a recent human study, which found that when children approached adults, they tended to do so in a way that kept the adults on their left.
Better bonding
Ingram and her colleagues found that mammal infants who keep their mother on their left are better able to keep track of her and hence increase their chance of survival.
When baby whales and horses move around with their mother on their left, for example, they are more likely to bond with her by rubbing up against her body, and less likely to be accidentally left behind.
However, if a threat emerges, the roles often reverse, Ingram says. “Infants keep their mother on their left in normal situations such as moving forward or suckling,” she says. “But when faced with stressful situations such as when fleeing, mothers prefer their infant on their left side so they can better monitor them.”
Human mothers cradle their babies on their left side while they are young and vulnerable, but this may switch as the children age and become more independent, Ingram says.
The consistent use of the right hemisphere in mother-infant interactions across all studied mammals hints that it has an evolutionary advantage, she says.
Lesley Rogers at the University of New England in Armidale, Australia, agrees. “If you’ve got different functions to perform, you can do that more effectively if you allocate different kinds of processing to each brain hemisphere,” she says. “So it makes sense for the right hemisphere to be dedicated to social behaviour.”
Journal reference: Nature Ecology & Evolution, DOI: 10.1038/s41559-016-0030 ||||| [Image caption: A baby kangaroo keeps a close eye on its mother]
Scientists say they have solved the mystery of why mothers tend to cradle newborn babies on the left.
This position activates the right hemisphere of the brain, which is involved in functions that help in communication and bonding, they say.
The "positional bias" is not unique to humans, with their advanced brains, but is also found in animals, according to researchers in Russia.
Similar behaviour has been seen in baby mammals following their mothers.
They include kangaroos and horses on land and walruses and orcas in the sea.
Dr Yegor Malashichev of Saint Petersburg State University said the position helped in survival and social bonding.
"If there is no eye contact, or it is wrong, there is no activation of the right hemisphere of the infant... the right hemisphere is responsible for social interactions," he told BBC News.
"All the [11] species we studied demonstrated the lateral bias.
"We suggest that this bias is even more widespread and may be a characteristic of all mammals, with few exceptions. "
Eye contact
It has long been known that humans and great apes tend to cradle their babies on the left, particularly during the first weeks of an infant's life.
Various explanations have been proposed, including physical contact - so an infant can hear their mother's heart beat - or practical benefits to the mother, who can keep a hand free for other tasks (if right-handed).
Alternatively, some have proposed it could be related to eye contact and its effect on the brain.
[Image caption: Wild horse and foal]
When a mother cradles her baby to the left and face-to-face, the left eyes of the mother and infant are directed towards each other, say the researchers.
Thus, the visual information goes mostly to their right hemisphere of the brain, the side involved in functions such as attention, memory, reasoning, and problem solving (all of which contribute to effective communication).
The researchers looked at humans and 10 wild animals:
feral horses
walruses
reindeer
antelope
musk ox
sheep
whales
orca
kangaroos
The scientists found the young animals kept close to the right side of their mother.
This meant they watched her mainly with their left eye, activating the right hemisphere of the brain.
While doing this, they were less likely to get separated from their mother and more likely to be able to find her again if they got lost.
Animal mothers tended to move to monitor their young with their left eye at times of stress.
For example, the researchers found that orca mothers swam to the right of their infants when the researchers approached them in a boat.
This happened regardless of the side from which the boat was approaching.
This runs contrary to the expectation that the mother would "defend" the infant by placing her body in between the calf and the boat, they say.
Dr Malashichev said the "cradling or positional bias" was not unique to humans or species with advanced brains such as whales but was "really widespread"; so the mechanism was likely to be "ancient and really basic".
The research could also have implications in studying development disorders associated with reduced eye contact between mother and infant, such as autism spectrum disorder, he added.
The study, by teams based in Russia, the US and Australia, is published in Nature Ecology & Evolution.
| – It's long been observed that mothers tend to cradle their infants on their left side, and this has long been at least informally attributed to handedness (so that right-handed mothers have the right hand free). Now researchers report in the journal Nature Ecology & Evolution that "positional bias" is in fact observed in multiple mammal species, and they say the reason for it is likely neurological, as the BBC reports. Turns out that this positioning on the left activates the brain's right hemisphere, responsible for processing social activities like communication and bonding. "We suggest that this bias is even more widespread and may be a characteristic of all mammals, with few exceptions," one researcher says. Studying wild animals—including horses, walruses, reindeer, sheep, and kangaroos—the team also found a "positional bias," but it varied depending on the situation. They found, for example, that young animals tend to keep their mothers on their left and watch her with mainly their left eye. In dangerous situations, however, the mothers would swap spots to keep their young on their left for better monitoring, reports New Scientist. The scientists recorded nearly 11,000 position choices across 175 pairs of infants and mothers. That the positional bias is so widespread among mammals suggests, as one researcher puts it, that the mechanism is "ancient and really basic." Meanwhile, the Orange County Register reports on a thrilling drone sighting of a mother and calf whale migrating south—with predictable positioning. (Here's why viruses go easier on females.)
||||| The Frankfurt-based lender blamed "human error" for the blunder, which saw 5 billion euros ($5.4 billion) being moved to accounts with four other banks on February 20.
Bloomberg News cited people familiar with the incident as saying the total amount transferred could have been as high as 6 billion euros.
The German business daily "Handelsblatt" said the error was pointed out by the German central bank, the Bundesbank.
KfW said it immediately detected and rectified the error and recalled the money without suffering any financial damage.
KfW said an experienced programmer made an input error that caused its software to automatically make multiple payments to the four banks.
The development bank said it "launched a comprehensive international and external investigation" so that it can avoid a repeat of the error.
History repeating?
The mistake isn't the first time KfW has had to go scurrying after erroneously transferred money. In 2008, during the financial crisis, the bank transferred 320 million euros to Lehman Brothers on the day the US investment bank filed for bankruptcy.
It was derided as "Germany's dumbest bank" by the mass-market daily "Bild," and the error became a political scandal.
KfW is not alone though in what the industry often calls "fat finger" incidents. In 2015, Deutsche Bank managed to make an erroneous payment of nearly six billion dollars to a hedge fund. The money was returned the next day.
On its website, KfW said it's the world's safest bank, citing an award by the magazine "Global Finance."
mm/sms (dpa, Handelsblatt, Bloomberg) | – To err is human, and a major bank is reminding the world of this for the second time after once again accidentally transferring a massive sum of money it shouldn't have. The blunder occurred on Feb. 20, when Frankfurt-based lender KfW, a German government-owned development bank, transferred $5.4 billion to four other banks, reports Deutsche Welle. Germany's central bank identified the mishap, which was caused by an "experienced" programmer's "configuration error" made while working on the bank's payment software, the FT reports. That contributed to the creation of an "automatic loop" in which duplicate payments were made. While the bank says it "launched a comprehensive international and external investigation" to identify the exact error and avoid ones like it moving forward, it says all the money was returned promptly. Two side notes most media are reporting: One, that newspaper Bild called KfW "Germany's dumbest bank" in 2008 after it transferred 320 million euros to Lehman Brothers on the day the US investment bank filed for bankruptcy. Second, that the German bank doesn't stand alone. In 2015 Deutsche Bank AG wrongly sent $6 billion to a hedge fund client, though it got the money back the next day. It happened when a junior foreign-exchange trader's boss was on vacation. Read about the mishap here. |
prostate specific antigen ( psa ) is a serine protease secreted by all types of prostatic tissue - normal , benign hyperplastic and cancerous - and is not a prostate cancer ( pca ) specific antigen . under normal conditions , the epithelial layer , the basal cell layer , and the basement membrane separate the intraductal contents from the lymphatic system , and the psa level in the prostatic fluid is a million times higher than in the serum .
when tumor or disease interferes with this barrier , psa enters the lymph and then the circulatory system .
so , as a blood marker , psa has some practical utilities in detecting early prostate cancer and monitoring response to therapy .
nevertheless , because it is produced by all types of prostatic tissue , it lacks sufficient sensitivity and specificity to be the perfect tumor marker for the detection of early prostate cancer ( 1 , 2 ) .
about 75% of men with psa levels greater than 4.0ng / ml do not have pca ( 3 ) , and psa has an extremely low positive predictive value ( ppv ) in diagnosing prostate cancer ( 4 ) .
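a short worked example of positive predictive value using the figure quoted above ( the helper function is our own illustration ) :

```python
# ppv = true positives / all positives; with ~75% of psa-positive men
# (psa > 4.0 ng/ml) not having pca, roughly 25 of every 100 positives
# are true positives.
def ppv(true_positives: float, false_positives: float) -> float:
    return true_positives / (true_positives + false_positives)

print(ppv(25, 75))  # 0.25 -> only about 1 in 4 psa "positives" is cancer
```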
a series of conditions can lead to elevation of the serum psa level ( 12 ) , including benign diseases such as benign prostate hyperplasia ( bph ) , prostatitis , and trauma / necrosis of the prostate gland , as well as manipulation of the prostate gland , including digital rectal examination ( dre ) and transrectal ultrasound ( trus ) .
in addition , race and a patient's age can affect the psa level as well .
to further improve the utility of psa as a screening tool for prostate cancer , oesterling et al . ( 5 ) developed , for white americans of the united states , age - related reference ranges for serum psa . they demonstrated that serum psa concentrations correlated directly with age , and the recommended upper limits of normal serum psa levels for white men are : 2.5 ng / ml for 40 - 49 years , 3.5 ng / ml for 50 - 59 years , 4.5 ng / ml for 60 - 69 years , and 6.5 ng / ml for 70 - 79 years .
after that , some articles ( 6 - 8 ) reported recommended reference ranges for serum psa for men of different races . even within the same race , however , the reference ranges for serum psa show some differences , as in studies on chinese men from taiwan , shaanxi , shanghai , and shandong ( 9 - 12 ) .
one report ( 13 ) compared three studies from different areas of the united states , and suggested that age - and race - related reference ranges for normal psa levels should be specific to the population from which the norms are derived . the serum psa levels increased with age , and the baseline psa and psa velocity in young chinese men without prostate cancer differ from those of african american and caucasian american men ( 14 ) . the aim of this study was to establish the reference ranges for psa in men from the beijing area without prostate cancer , and spin - offs include comparisons with white americans and other chinese men of different geographic regions .
in this cross sectional study , from april 2010 to october 2011 , 1611 chinese men , aged 40 to 91 years , undergoing a routine health check - up in our hospital were recruited into the study .
all the subjects were stratified into 10-year age groups : 40 to 49 years , 50 to 59 years , 60 to 69 years , 70 to 79 years , and older than 80 years .
the institutional review board committee approved the research protocols , and all patients provided informed consent .
men were excluded if they presented with a prior history of prostate cancer / surgery , urinary tract infection / obstruction , ejaculation within 48 hours before the psa test , or a serum psa level greater than 20 ng / ml . all the subjects received three examinations , including the serum psa test , dre and trus , to determine their health status .
the abnormal psa levels were defined as a concentration > 4.0 ng / ml , the abnormal dre findings were defined as palpable induration , nodularity , irregularity , or asymmetry , and the abnormal trus findings were defined as capsular irregularity , deformation , or existence of a hypoechoic region / nodule ( 15 ) .
we set up the criteria for biopsy as below : men with any two abnormal results of the above three examinations . men with any two normal results , or with two abnormal results but a negative biopsy , were regarded as men without cancer .
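a minimal sketch of this classification rule ( function names are ours ; thresholds as described above ) :

```python
# biopsy rule from the methods: biopsy if at least two of the three
# screening tests (psa, dre, trus) are abnormal.
from typing import Optional

def needs_biopsy(psa_ng_ml: float, dre_abnormal: bool, trus_abnormal: bool) -> bool:
    abnormal_count = sum([psa_ng_ml > 4.0, dre_abnormal, trus_abnormal])
    return abnormal_count >= 2

def without_cancer(psa_ng_ml: float, dre_abnormal: bool, trus_abnormal: bool,
                   biopsy_positive: Optional[bool] = None) -> bool:
    # fewer than two abnormal tests, or a triggered biopsy that came back
    # negative, classifies the subject as "without cancer" in this study.
    if not needs_biopsy(psa_ng_ml, dre_abnormal, trus_abnormal):
        return True
    return biopsy_positive is False

print(needs_biopsy(5.2, True, False))                           # True
print(without_cancer(5.2, True, False, biopsy_positive=False))  # True
```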
serum psa levels were measured using the elecsys 2010 immunoassay ( roche diagnostics gmbh , mannheim , germany ) .
trus were performed with acuson sequoia512 ( siemens medical solutions usa , inc , mountain view , california , usa ) and iu22 ( philips ultrasound , bothell , washington , usa ) scanners .
trus - guided sextant biopsies were performed , and , if trus or dre revealed abnormal findings , we performed one to two additional biopsies in the suspicious areas ( fig . 1 : flow chart of this study - men with any two normal results of the three examinations , or with two abnormal results but a negative biopsy , were recruited ) .
descriptive statistics including the mean , median , 5th , 25th , 75th , and 95th percentiles of the psa level were calculated for each 10-year age group .
spearman rank correlation coefficients were calculated to measure the association between serum psa and age .
the 95th percentile was determined as the upper limit of normal ( reference range ) for the midpoint of each 10-year age group for the serum psa level .
analysis was performed using the statistical package for social sciences software program ( spss for windows ver.11.5 ) .
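a hedged sketch of the descriptive analysis described above , using python in place of spss ( the arrays are simulated , not study data ) :

```python
# age-stratified percentiles of psa plus spearman correlation with age;
# the 95th percentile is taken as the upper limit of the reference range.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ages = rng.integers(40, 92, size=500)                    # simulated ages 40-91
psa = np.exp(0.03 * (ages - 40)) * rng.lognormal(-0.5, 0.6, size=500)

rho, p = spearmanr(ages, psa)
print(f"spearman rho = {rho:.3f}, p = {p:.3g}")

for lo, hi in [(40, 49), (50, 59), (60, 69), (70, 79), (80, 120)]:
    grp = psa[(ages >= lo) & (ages <= hi)]
    if grp.size:
        q5, q50, q95 = np.percentile(grp, [5, 50, 95])
        print(f"{lo}-{hi} y: median {q50:.2f}, 5th {q5:.2f}, "
              f"95th (reference upper limit) {q95:.2f} ng/ml")
```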
of the 1611 subjects , 99 were biopsied , and 37 cases of pca , two cases of prostatitis , and 60 cases of bph and prostatic intraepithelial neoplasia ( pin ) were diagnosed .
table 1 shows the proportion of men with various serum psa levels according to age .
table 1 : proportion of men without prostate cancer with various serum psa levels according to age .
among the 1572 subjects , 1416 ( 90.1% ) had a serum psa level less than 4 ng / ml , 144 ( 9.2% ) greater than 4 ng / ml but less than 10 ng / ml , and only 12 ( 0.7% ) greater than 10 ng / ml .
the serum psa levels among the age groups differed significantly ( between group 40 - 49 and group 50 - 59 , p<0.05 ; among the other groups , p<0.01 ) , and over the entire age range the serum psa level directly correlated with age ( r = 0.314 , p<0.001 ) .
table 2 shows the mean , median and percentiles of serum psa levels according to age .
table 1 : frequency of prostate - specific antigen ( psa ) in chinese men without prostate cancer .
table 2 : the mean , median and percentiles of serum psa according to age .
since psa was discovered by ablin ( 16 ) in 1970 , it has become the most widely used tumor marker for detecting early prostate cancer and monitoring response to therapy .
therefore , determining the normal range for the serum psa level is important , and 4.0 ng / ml as the upper limit of the serum psa level has been widely accepted . however , a meta - analysis ( 3 ) showed that among men whose serum psa level was greater than 4.0 ng / ml , more than 70 percent did not have pca . as a man ages , bph becomes increasingly common and itself raises the serum psa level . so , for younger men without bph or with only slight bph , the notion of a normal range for the serum psa level should be different from that for older men with marked bph .
a recent report ( 17 ) showed that patient age , psa , prostate volume and dre status were independent variables predicting a positive initial prostate biopsy , while trus was not . in our study , we set up the criteria for biopsy as men with any two abnormal results of the three tests ( dre / trus / psa ) for several reasons .
firstly , evidence from autopsy studies showed that the prevalence of prostate cancer was 15 to 60 percent among men 60 to 90 years old and increased with age ( 18 ) ; considering that a man's risk of death from prostate cancer is 3 to 4 percent while his lifetime risk of a diagnosis of prostate cancer is 16.7 percent , it is apparent that many prostate cancers detected in routine practice may be clinically unimportant , and lowering the cutoff would increase the risks of overdiagnosing and overtreating clinically unimportant disease ( 19 ) . secondly , the detection rate of abnormal findings on dre and trus was 14.4% and 9.5% respectively with psa levels of 4.0 ng / ml or less , and adding trus to dre in the screening program of subjects with psa levels of 2.0 to 4.0 ng / ml increased the detection rate of prostate cancer to 30.8% ( 15 ) .
thirdly , 72% to 82% of patients who undergo biopsy based on dre findings will not have prostate cancer ( 20 ) .
frothily , trus has not been used in the first - line screening examination for prostate cancer because of it lacks the ability to diagnose the prostate cancer in early stage , while it has been widely used as a screening and following - up tool for prostate cancer in china , because it is much cheaper .
finally , papers have been reported that even some patients with prostate cancer can not be treated immediately at the time it was diagnosed , and they can undergo follow - ups with a specific surveillance program , called active surveillance ( as ) , which is the practical way of avoiding possible overtreatment of prostate cancer ( 21 , 22 ) . in our study , more than 90 percent of the patients psa levels were lower than 4.0ng / ml , and for the subjects with only one positive finding out of the three tests ( dre / trus /psa ) , we take them in a following - up program , monitoring the psa level , dre and trus periodically , which may avoid the unnecessarily biopsies and not miss the prostate cancers . in current study
, the proportion of men with psa level less than 4.0g / ml was 100% for 40 - 49 year - old age group , 98.7% for 50 - 59 year - old age group , and the upper limit ( 95 percentile ) for the two groups was 1.565 and 2.920 respectively .
if we still use 4.0ng / ml as the upper limit of serum psa level , then we can image the sensitivity will be very low in these two groups in the pca screening test .
similarly , the upper limits for the three groups old than 60 years were higher than 4.0 ng / ml , and the traditional cutoff of 4.0 ng / ml would cause too many unnecessarily biopsies . compared with the results for white americans ( 5 ) and chinese men from different regions , the upper limit of normal serum psa levels for chinese men in this study were lower than in white men and in taiwanese ( 9 ) , higher than in chinese shaanxi ( 10 ) , and similar to that in chinese shanghai ( 11 ) and shandong ( 12 ) ( table 3 ) .
comparison of serum psa reference ranges among white americans and chinese from different regions in current study , the reference ranges was lower than in white americans ( 5 ) , this must be caused by the differences among races , dietary and environment factors .
interestingly , the reference ranges for psa level in chinese men in taiwan , shanghai , shandong and beijing ( current study ) are all higher than in shaanxi , and geographically , the former four locate in the east of china and the latter one locates in the west of china .
the environment differences between east and west of china are conspicuous , and this phenomenon may suggest that geography and the environmental factors might play an important role in the development of pca .
moreover , the dietary among the five regions has some differences , and which will be another factor in the development of pca .
in addition , although china is a country with multiple nationalities , but more than 90 percent of the populations are han nationality , moreover , because of the dynasties alternation , wars and marriages between different nationalities in the past thousands of years , the ethnical disparity inside chinese are vague now .
so the reference ranges for psa level caused by different nationalities among the five studies maybe very small and can be ignored .
, the reference ranges for psa level for chinese men in different regions should be different , just like table 3 shows .
the goal of screening is to detect clinically significant prostate cancers at a stage when intervention reduces morbidity and mortality .
this study shows that over the entire age range , the serum psa level directly correlated with age ( r0.314p<0.001 ) , coincidences with prior studies ( 612 ) . but not only age can cause the elevation of psa level , a serious of conditions can lead the elevation of serum psa level ( 1 , 2 ) , as we mentioned before , and all these factors should be considered in screening .
in addition , the family history of prostate cancer and previous negative biopsies should be involved also .
even so , it is more appropriate to practice the age - related reference ranges rather than use the single cutoff of 4.0ng / ml for men of all age groups .
first , the data of current study were not derived from a community - based population , but came from the same hospital . yet , in this study cohort , all subjects were the healthy people who came to our hospital for routine health examination instead of the patients from the out - patient or in - patient department , and we also excluded the patients with urinary tract infection or obstruction , so the bias were decreased to the minimum .
second , the definition of men without cancer is men with any two negative results of the three examinations ( psa , dre and trus ) or had two abnormal results but with a negative biopsy , which may miss some pca cases , and also the sextant biopsies would miss some pca cases ( 23 ) .
to avoid such bias , all the subjects should be biopsied and undergo more than 10 to 12 biopsies , which may increase the risks of overdiagnosing and overtreating clinically unimportant disease ( 3 ) , and it can not be easily accomplished also .
the aim to determine the reference ranges for age - related psa levels is to improve the diagnostic accuracy rate in the pca screening test , i.e. , increase the sensitivity for the relatively young men who may be diagnosed earlier and benefit from aggressive treatment intervention , and increase the specificity for the older men to avoid the unnecessarily biopsies which can decrease the patient suffering from the biopsies and alleviate the burden on the patient s mind and economy .
our study shows that the serum psa level increases as man ages and use the age - related range for psa increases the sensitivity in younger men and decreases the biopsy rate in older patients .
ethical issues ( including plagiarism , informed consent , misconduct , data fabrication and/or falsification , double publication and/or submission , redundancy , etc . ) have been completely observed by the authors . | abstractbackgroundto determine the normal ranges of serum age - related prostate - specific antigen ( psa ) level in men from beijing area without cancer.methodsin this cross sectional study , form april 2010 to october 2011 , 1611 healthy men undergoing a routine health check - up in our hospital and all men received three examinations including serum psa test , digital rectal ex - amination and transrectal ultrasound .
men with any two abnormal results of the three examinations were undergone a prostate biopsy .
men with any two normal results of the three examinations or with negative biopsy were defined as men without cancer .
men with a prior history of prostate cancer / surgery or with urinary tract infection / obstruction were excluded .
1572 men without cancer were recruited into the study finally and were stratified into 10-year age groups : 40 to 49 , 50 to 59 , 60 to 69 , 70 to 79 , and older than 80.resultsthe median psa value ( 95th percentile range ) was 0.506(1.565 ) , 1.04(2.920 ) , 1.16(4.113 ) , 1.34(5.561)and 2.975 ( 7.285 ) for each age group respectively , and the 25th percentile to 75 percentile was 0.343 to 0.923 , 0.663 to 1.580 , 0.693 to 2.203 , 0.789 to 2.368 and 1.188 to 4.295 respectively . the serum psa value is directly correlated with age ( r=0.314 , p0.001).conclusionsuse the age - related range for psa increases the sensitivity in younger men and decreases the biopsy rate in older patients . |
SECTION 1. SHORT TITLE AND TABLE OF CONTENTS.
(a) Short Title.--This Act may be cited as the ``Catastrophic
Wildfire Prevention Act of 2012''.
(b) Table of Contents.--The table of contents of this Act is as
follows:
Sec. 1. Short title and table of contents.
Sec. 2. Purposes.
Sec. 3. Definitions.
Sec. 4. Authorized wildfire prevention projects.
Sec. 5. Public review and environmental analysis.
Sec. 6. Administrative and judicial review.
Sec. 7. Threatened and endangered species designations.
SEC. 2. PURPOSES.
The purposes of this Act are as follows:
(1) Expedite wildfire prevention projects to reduce the
chances of wildfire, including catastrophic wildfire, on
certain Federal lands.
(2) Reduce threats to endangered species from wildfires.
(3) Provide efficiency tools to the Secretary of
Agriculture and the Secretary of the Interior to streamline
projects to reduce the potential for wildfires.
SEC. 3. DEFINITIONS.
In this Act:
(1) At-risk community.--The term ``at-risk community'' has
the meaning given that term in section 101 of the Healthy
Forests Restoration Act of 2003 (16 U.S.C. 6511).
(2) At-risk forest.--The term ``at-risk forest'' means--
(A) Federal land where there exists a high risk of
losing an at-risk community, key ecosystem, wildlife,
or wildlife habitat to wildfire, including catastrophic
wildfire and post-fire disturbances, as documented by
the Secretary concerned; or
(B) Federal land in condition class II or III, as
those classes were developed by the Forest Service
Rocky Mountain Research Station in the general
technical report titled ``Development of Coarse-Scale
Spatial Data for Wildland Fire and Fuel Management''
(RMRS-87) and dated April 2000 or any subsequent
revision of the report.
(3) Authorized wildfire prevention project.--The term
``authorized wildfire prevention project'' means the measures
and methods developed for a project to be carried out in an at-
risk forest or on threatened and endangered species habitat by
the Secretary concerned for the purpose of hazardous fuels
reduction, forest health, forest restoration, watershed
restoration, or threatened and endangered species habitat
protection. An authorized wildfire prevention project may
include livestock grazing and timber harvest projects carried
out for one or more of such purposes.
(4) Federal land.--
(A) Covered land.--The term ``Federal land''
means--
(i) land of the National Forest System (as
defined in section 11(a) of the Forest and
Rangeland Renewable Resources Planning Act of
1974 (16 U.S.C. 1609(a))); or
(ii) public lands (as defined in section
103 of the Federal Land Policy and Management
Act of 1976 (43 U.S.C. 1702)).
(B) Excluded land.--The term does not include land
in which the removal of vegetation is specifically
prohibited by Federal law unless the land is in an
inventoried roadless area or wilderness study area.
(5) Secretary concerned.--The term ``Secretary concerned''
means--
(A) the Secretary of Agriculture, in the case of
National Forest System land; and
(B) the Secretary of the Interior, in the case of
public lands.
(6) Threatened and endangered species habitat.--The term
``threatened and endangered species habitat'' means Federal
land regarding which natural fire regimes are identified as
being important for, or wildfire is identified as a threat to,
an endangered species, a threatened species, or habitat of an
endangered species or threatened species in--
(A) a species recovery plan prepared under section
4 of the Endangered Species Act of 1973 (16 U.S.C.
1533); or
(B) a notice published in the Federal Register
determining a species to be an endangered species or a
threatened species or designating critical habitat for
an endangered species or a threatened species.
SEC. 4. AUTHORIZED WILDFIRE PREVENTION PROJECTS.
(a) Projects Authorized.--As soon as practicable after the date of
the enactment of this Act, the Secretary concerned shall implement
authorized wildfire prevention projects in at-risk forests and on
threatened and endangered species habitat in a manner that focuses on
surface, ladder, and canopy fuels reduction activities.
(b) Project Elements.--
(1) Threatened and endangered species habitat.--In the case
of an authorized wildfire prevention project carried out on
threatened and endangered species habitat, the project shall be
carried out--
(A) to provide enhanced protection from wildfire,
including catastrophic wildfire, for the endangered
species, threatened species, or habitat of the
endangered species or threatened species; and
(B) in compliance with any applicable guidelines
specified in the species recovery plan prepared under
section 4 of the Endangered Species Act of 1973 (16
U.S.C. 1533).
(2) At-risk forests.--In the case of an authorized wildfire
prevention project carried out in an at-risk forest, the
project shall be carried out to move Federal land in condition
class II or III toward condition class I.
(c) Grazing.--Domestic livestock grazing may be used in an
authorized wildfire prevention project to reduce surface fuel loads and
to recover burned areas. Utilization standards shall not apply when
domestic livestock grazing is used in an authorized wildfire prevention
project.
(d) Timber Harvesting and Thinning.--Timber harvesting and thinning
may be used in an authorized wildfire prevention project to reduce
ladder and canopy fuel loads to prevent wildfire, including
catastrophic wildfire.
(e) Relation to Land and Resource Management Plans and Land Use
Plan.--Nothing in this section requires the Secretary concerned, as a
condition of conducting an authorized wildfire prevention project, to
revise or amend the land and resource management plan applicable to the
National Forest System lands or the land use plan applicable to the
public lands on which the project will be conducted.
(f) Consideration of Public Petitions.--Not later than 60 days
after receiving a public petition for the designation of Federal land
as an at-risk forest or as threatened and endangered species habitat,
the Secretary concerned shall--
(1) review the petition; and
(2) make a determination regarding such designation.
SEC. 5. PUBLIC REVIEW AND ENVIRONMENTAL ANALYSIS.
(a) Public Notice and Comment.--
(1) Proposed projects.--The Secretary concerned shall
publish in the Federal Register notice of a proposed authorized
wildfire prevention project. The public may submit to the
Secretary specific written comments that relate to the project
within 30 days after the date of publication of the notice.
(2) Final decision.--Not later than 60 days after the date
on which notice was published under paragraph (1) with regard
to a proposed authorized wildfire prevention project and after
taking into account any comments received under such paragraph,
the Secretary concerned shall designate the final project and
publish in the Federal Register notice of final designated
project. Only persons who submitted comments regarding the
proposed project under paragraph (1) may submit to the
Secretary specific written comments that relate to the final
designated project. Any comments regarding the final designated
prevention project must be submitted within 30 days after the
date of the publication of the notice.
(b) Environmental Analysis Generally.--Except as otherwise provided
in this Act, the Secretary concerned shall comply with the National
Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) and other
applicable laws in planning and conducting an authorized wildfire
prevention project.
(c) Interagency Cooperation.--The informal consultation
requirements of the Endangered Species Act of 1973 (16 U.S.C. 1531 et
seq.), as codified in section 402.05 of title 50, Code of Federal
Regulations shall apply to an authorized wildfire prevention project.
(d) Special Rules for Certain Projects.--
(1) Covered projects; deadline.--If an authorized wildfire
prevention project includes timber harvesting or grazing, the
Secretary concerned shall prepare an environmental assessment
within 30 days after the date on which notice was published
under subsection (a)(1) for the proposed agency action under
section 102(2) of the National Environmental Policy Act of 1969
(42 U.S.C. 4332(2)).
(2) Effect of failure to meet deadline.--The authorized
wildfire prevention project shall be deemed compliant with all
requirements of the National Environmental Policy Act of 1969
if the Secretary concerned fails to meet the deadline specified
in paragraph (1).
(3) Project lengths.--In the case of a livestock grazing
project, the environmental assessment shall be deemed
sufficient for a minimum of 10 years. In the case of a timber
harvest project, the environmental assessment shall be deemed
sufficient for a minimum of 20 years.
(4) Alternatives.--Nothing in this section requires the
Secretary concerned to study, develop, or describe any
alternative to the proposed agency action in the environmental
assessment conducted under paragraph (1).
(e) Effect of Compliance.--Compliance with this section shall be
deemed to satisfy the requirements of the National Environmental Policy
Act of 1969 (42 U.S.C. 4331 et seq.), section 14 of the National Forest
Management Act of 1976 (16 U.S.C. 472a), the Endangered Species Act of
1973 (16 U.S.C. 1531 et seq.), and the Multiple-Use Sustained-Yield Act
of 1960 (16 U.S.C. 528 et seq.).
SEC. 6. ADMINISTRATIVE AND JUDICIAL REVIEW.
(a) Administrative Review.--Administrative review of an authorized
wildfire prevention project shall occur in accordance with the special
administrative review process established under section 105 of the
Healthy Forests Restoration Act of 2003 (16 U.S.C. 6515).
(b) Judicial Review.--Judicial review of an authorized wildfire
prevention project shall occur in accordance with section 106 of the
Healthy Forests Restoration Act of 2003 (16 U.S.C. 6516).
SEC. 7. THREATENED AND ENDANGERED SPECIES DESIGNATIONS.
Before listing any species under the Endangered Species Act of 1973
(16 U.S.C. 1531 et seq.), the Secretary concerned shall conduct
research to find what impact a listing would have on forest fuel loads,
both forage and timber. Endangered species recovery plans and critical
habitat determinations shall include wildfire risk assessment analysis. | Catastrophic Wildfire Prevention Act of 2012 - Authorizes the Secretary of Agriculture (USDA), with respect to National Forest System lands, and the Secretary of the Interior, with respect to public lands, (the Secretaries) to implement authorized wildfire prevention projects in at-risk forests and threatened and endangered species in a manner that focuses on surface, ladder, and canopy fuels reduction activities.
Requires projects carried out on threatened and endangered species habitat to: (1) provide enhanced protection from wildfire, including catastrophic wildfire, for the endangered species, threatened species, or their habitat; and (2) comply with applicable recovery plan guidelines.
Requires projects carried out in at-risk forests to move the federal land from condition class II or III toward condition class I.
Permits use in a project of: (1) domestic livestock grazing to reduce surface fuel loads and to recover burned areas; and (2) timber harvesting and thinning to reduce ladder and canopy fuel loads for the prevention of wildfire, including catastrophic wildfires.
Directs the Secretaries to review public petitions for, and make determinations with respect to, the designation of federal lands as at-risk forests or as threatened and endangered species habitats.
Requires notice in the Federal Register of proposed projects and final designated projects and permits public comment on projects as specified.
Instructs the Secretaries to prepare an environmental assessment for projects that include timber harvesting or grazing.
Instructs the Secretaries to research what impact any listing of a species under the Endangered Species Act of 1973 would have on both forage and timber forest fuel loads. Requires endangered species recovery plans and critical habitat determinations to include a wildfire risk assessment analysis. |
there is much current interest in the use of coherent control in order to generate novel matter - radiation states in cavity qed and atom - optics systems @xcite .
in addition , the field of cavity qed has caught the interest of workers in the field of solid - state nanostructures , since effective two - level systems can be fabricated using semiconductor quantum dots , organic molecules and even naturally - occuring biological systems such as the photosynthetic complexes lhi and lhii and in biological imaging setups involving fret ( fluoresence resonance energy transfer ) @xcite .
such nanostructure systems could then be embedded in optical cavities or their equivalent , such as in the gap of a photonic band - gap material @xcite .
we refer to ref .
@xcite for a discussion of the size and energy - gaps of the artificial nanostructure systems which can currently be fabricated experimentally . in a parallel development ,
phase transitions in quantum systems are currently attracting much attention within the solid - state , atomic and quantum information communities @xcite .
most of the focus within the solid - state community has been on phase transitions in electronic systems such as low - dimensional magnets @xcite while in atomic physics there has been much interest in phase transitions in cold atom gases and in atoms coupled to a cavity . in particular , a second - order phase transition , from normal to superradiant , is known to arise in the dicke model which considers @xmath0 two - state atoms ( i.e. ` spins ' or ` qubits ' @xcite ) coupled to an electromagnetic field ( i.e. bosonic cavity mode ) @xcite .
the dicke model itself has been studied within the atomic physics community for fifty years , but has recently caught the attention of solid - state physicists working on arrays of quantum dots , josephson junctions , and magnetoplasmas @xcite .
its extension to quantum chaos @xcite , quantum information @xcite and other exactly solvable models has also been considered recently @xcite .
it has also been conjectured that superradiance could be used as a mechanism for quantum teleportation @xcite . here
we extend our discussion in ref .
@xcite on the exploration of novel phase transitions in atom - radiation systems exploiting the current levels of experimental expertise in the area of coherent control .
the corresponding experimental set - up can be a cavity - qed , atom - optics , or nanostructure - optics system , whose energy gaps and interactions are tailored to be the required generalization of the well - known dicke model @xcite .
we show that , according to the values of these control parameters , the phase transitions be driven to become first - order .
the well - known dicke model from atom - optics ignores interactions between the constituent two - level systems or ` spins ' @xcite . in atomic systems where each ` spin ' is an atom ,
this is arguably an acceptable approximation if the atoms are neutral _ and _ the atom - atom separation @xmath1 where @xmath2 is the atomic diameter .
however there are several reasons why this approximation is unlikely to be valid in typical solid - state systems .
first , the ` spin ' can be represented by any nanostructure ( e.g. quantum dot ) possessing two well - defined energy levels , yet such nanostructures are not typically neutral . hence there will in general be a short - ranged ( due to screening ) electrostatic interaction between neighbouring nanostructures .
second , even if each nanostructure is neutral , the typical separation distance @xmath3 between fabricated and self - organised nanostructures is typically the same as the size of the nanostructure itself .
hence neutral systems such as excitonic quantum dots will still have a significant interaction between nearest neighbors @xcite .
motivated by the experimental relevance of ` spin
spin ' interactions , we introduce and analyze a generalised dicke hamiltonian which is relevant to current experimental setups in both the solid - state and atomic communities @xcite . we show that the presence of transverse spin spin coupling terms , leads to novel first - order phase transitions associated with super - radiance in the bosonic cavity field . a technologically important example within the solid - state community would be an array of quantum dots coupled to an optical mode .
this mode could arise from an optical cavity , or a defect mode in a photonic band gap material @xcite .
however we emphasise that the @xmath0 ` spins ' may correspond to _ any _ two - level system , including superconducting qubits and atoms @xcite .
the bosonic field is then any field to which the corresponding spins couple @xcite .
apart from the experimental prediction of novel phase transitions , our work also provides an interesting generalisation of the well - known dicke model .
the method of solution that we present here is in fact valid for a wider class of hamiltonians incorporating spin spin and spin boson interactions @xcite .
we follow the method of wang and hioe @xcite , whose results also proved to be valid for a wider class of dicke hamiltonians .
we focus on the simple example of the dicke hamiltonian with an additional spin
spin interaction in the @xmath4 direction .
@xmath5 following the discussion above , the experimental spin spin interactions are likely to be short - ranged and hence only nearest - neighbor interactions are included in @xmath6 .
the operators in eqs . 1 and 2 have their usual , standard meanings .
to obtain the thermodynamical properties of the system , we first introduce the glauber coherent states @xmath7 of the field @xcite where @xmath8 , @xmath9 .
the coherent states are complete , @xmath10 . in this basis
, we may write the canonical partition function as : @xmath11 as in ref . @xcite , we adopt the following assumptions : 1 .
@xmath12 and @xmath13 exist as @xmath14 ; 2 .
@xmath15 can be interchanged
we then find @xmath16 where @xmath17 we first rotate about the @xmath4-axis to give @xmath18 we note here that the resulting hamiltonian is of the type of an ising hamiltonian with a transverse field , and it exhibits a divergence in concurrence at its quantum phase transition ( see , e.g. , @xcite ) .
this particular model is instrumental in understanding the nature of coherence in quantum systems .
going back to the calculations , we may now diagonalise @xmath19 by performing a jordan - wigner transformation , passing into momentum - space and then performing a bogoliubov transformation ( see , for example , ref .
we then have , in terms of momentum - space fermion operators @xmath20 , the diagonalised @xmath19 : @xmath21 with @xmath22 we may then write @xmath23 where @xmath24 from the transformation , we may associate the spin - up state with an empty orbit on the site and a spin - down state with an occupied orbital . using the commutation relations for the @xmath20 and the fact that @xmath25 ( see , for example , ref .
@xcite ) , we obtain @xmath26 writing @xmath27 , @xmath28 and integrating out @xmath29 we obtain @xmath30 + \log ( 2 ) \right\ } } \ .\ ] ] we now let @xmath31 . writing @xmath32 as @xmath33 , yields @xmath34 where @xmath35 + \log ( 2 ) \right\ } \end{aligned}\ ] ] and @xmath36 from here on , we omit the @xmath37 term in @xmath38 since it only contributes an overall factor to @xmath39 . laplace s method now tells us that @xmath40 \right\}.\ ] ] denoting @xmath41 $ ] by @xmath42 , we recall that the super - radiant phase corresponds to @xmath42 having its maximum at a non - zero @xmath43 @xcite . if there is no transverse field , i.e. , if @xmath44 , and the temperature is fixed , then the maximum of @xmath42 will split continuously into two maxima symmetric about the origin as @xmath45 increases .
hence the process is a continuous phase transition .
however the case of non - zero @xmath46 is qualitatively different from @xmath44 . as a result of the frustration induced by the tranverse nearest - neighbour couplings , _
there are regions where the super - radiant phase transition becomes first - order .
_ hence _ the system s phase transition can be driven to become first - order by suitable adjustment of the nearest - neighbour couplings_. this phenomenon of first - order phase transitions is revealed by considering the functional shape of @xmath42 , as shown in fig .
[ pt ] .
figure [ x - lambda ] shows the value of @xmath43 that maximises @xmath42 at fixed @xmath47 and two different values of @xmath46 . from the two lines , we can see that the spin
spin coupling actually acts to inhibit the phase transition .
as we increase @xmath46 from @xmath48 to @xmath49 we can see that the value to which we have to increase @xmath50 to induce a phase transition is higher .
figure [ maximiser ] plots the maximiser of @xmath42 with @xmath50 fixed at a value of 1.3 . for small @xmath46 ,
the local ( non - zero ) maximum of @xmath42 converges to zero as we increase @xmath47 and the system is no longer super - radiant .
this is no longer the case if @xmath46 is increased . in this case
, @xmath42 has a global maximum when @xmath47 is small ; however as @xmath47 increases , the non - zero local maxima becomes dominant and as a result a first - order phase transition occurs .
we note that the barriers between the wells are infinite in the thermodynamic limit , hence we expect that the sub - radiant state is metastable as @xmath47 increases .
this observation also suggests the phenomenon of hysteresis , which awaits experimental validation . in fig .
[ op ] we consider the order parameter of the transition , @xmath51 .
following the same method as above , we may calculate this to be equivalent to @xmath52 with an additional @xmath53 term that comes from the imaginary part of the coherent states of the radiation field @xcite .
we can see from the figure that as we lower @xmath54 we drive the system first through a first order phase transition and then through a continuous phase transition .
thus we are able to achieve both a first and second order phase transition by varying the one parameter , @xmath54 .
in conclusion , we have shown that the experimentally relevant spin - spin interaction in the dicke model transforms it into an ising - hamiltonian with a photon - field dependent transverse field , which allows for an existence of both first - order and second - order phase transitions as parameters vary .
our results highlight the importance of spin - spin coupling terms in spin - boson systems and opens up the possibility of coherently controlling the competition between the sub - radiant and super - radiant states in experimental atom - radiation systems @xcite .
99 t. vorrath and t. brandes , phys . rev .
b * 68 * , 035309 ( 2003 ) ; w.a . al - saidi and d. stroud , phys . rev .
b * 65 * , 224512 ( 2002 ) ; x. zou , k. pahlke and w. mathis , quant - ph/0201011 ; s. raghavan , h. pu , p. meystre and n.p . bigelow , cond - mat/0010140 ; n. nayak , a.s .
majumdar and v. bartzis , j. nonlinear optics * 24 * , 319 ( 2000 ) ; t. brandes , j. inoue and a. shimizu , cond - mat/9908448 and cond - mat/9908447 .
n. lambert , c. emary and t. brandes , phys .
lett . * 92 * , 073602 ( 2004 ) ; s. schneider and g.j .
milburn , quant - ph/0112042 ; g. ramon , c. brif and a. mann , phys .
a * 58 * , 2506 ( 1998 ) ; a. messikh , z. ficek and m.r.b .
wahiddin , quant - ph/0303100 .
e. hagley et al . , phys . rev . lett .
* 79 * , 1 ( 1997 ) ; a. rauschenbeutel et al . ,
science * 288 * , 2024 ( 2000 ) ; a. imamoglu et al .
lett . * 83 * , 4204 ( 1999 ) ; s.m .
dutra , p.l .
knight and h. moya - cessa , phys .
a * 49 * , 1993 ( 1994 ) ; y. yamamoto and r. slusher , physics today , june ( 1993 ) , p. 66 ; d.k .
young , l. zhang , d.d .
awschalom and e.l .
hu , phys . rev .
b * 66 * , 081307 ( 2002 ) ; g.s .
solomon , m. pelton and y. yamamoto , phys .
lett . * 86 * , 3903 ( 2001 ) ; b. moller , m.v .
artemyev and u. woggon , appl .
* 80 * , 3253 ( 2002 ) ; n. f. johnson , j. phys .
matter 7 , 965 ( 1995 ) . | we propose the use of coherent control of a multi - qubit cavity qed system in order to explore novel phase transition phenomena in a general class of multi - qubit cavity systems .
in addition to atomic systems , the associated super - radiant phase transitions should be observable in a variety of solid - state experimental systems , including the technologically important case of interacting quantum dots coupled to an optical cavity mode . |
Sen. Ted Cruz, (R-Texas), said on the Senate floor Tuesday he would stand against the president's health care law "until I'm no longer able to stand." (The Washington Post)
Sen. Ted Cruz, (R-Texas), said on the Senate floor Tuesday he would stand against the president's health care law "until I'm no longer able to stand." (The Washington Post)
Continuing his vow to keep speaking against the new federal health-care law "until I am no longer able to stand," Sen. Ted Cruz (R-Tex.) continued with his marathon speech modeled on old-fashioned filibusters Tuesday evening in hopes of slowing debate over a short-term spending measure.
"I rise today in opposition to Obamacare," Cruz announced as he began his remarks Tuesday afternoon, saying he would be speaking on behalf of millions of Texans and Americans opposed to the new health-care law.
"A great many Texans, a great many Americans feel they do not have a voice, and so I hope to play some very small role in providing the voice," he said.
(Watch: Highlights from Cruz's marathon speech (so far) )
Shortly after 8 p.m., Cruz announced he would begin reading "bedtime stories" to his two young daughters, who he said were back home in Texas watching with his wife. Cruz started with the Bible, quoting from "King Solomon's Wise Words" from the Book of Proverbs. Then he read the Dr. Seuss classic, "Green Eggs and Ham," in its entirety, noting that it was one of his favorite children's books.
When he was done reading, Cruz told his daughters: "I love with you all my heart. It's bed time, give mommy a hug and a kiss, brush your teeth, say your prayers and daddy's going to be home soon to read to you in person."
By holding the floor, Cruz and his allies are launching what most Americans know as a traditional filibuster, or the practice of holding the chamber for several hours on end by speaking continuously, an exercise perhaps best epitomized by actor Jimmy Stewart in the movie "Mr. Smith Goes to Washington."
But even if Cruz were physically capable of speaking for more than 24 hours -- the longest filibuster in U.S. history is 24 hours, 18 minutes by the late Sen. Strom Thurmond (S.C.) and other Southern senators opposed to civil rights laws -- there are already parliamentary procedures in place that dictate that Cruz will have to yield the floor by Wednesday afternoon at the latest.
At that point, the Senate is scheduled to hold a key procedural test vote that is near certain to pass with bipartisan support.
Cruz, a freshman senator, was joined in his efforts by several other Republican senators, including Mike Lee (Utah), David Vitter (La.), Rand Paul (Ky.), Pat Roberts (Kan.), Jeff Sessions (Ala.) and Marco Rubio (Fla.). Paul even sent a callout on Twitter asking supporters to send him questions that he said he would ask Cruz later on the Senate floor. Rep. Paul Broun (R-Ga.), who is running for an open U.S. Senate seat next year, also visited the Senate to watch Cruz speak.
Aides to the senators gave no sense of when Cruz and his allies would conclude, but Cruz is likely angling to match the duration of two other recent filibusters. One, led by Paul in March, lasted nearly 13 hours, while a filibuster by Texas Democratic state Sen. Wendy Davis in June lasted just under 11 hours.
After a little more than two hours, Cruz had discussed an unusual mix of subjects, ranging from opposition to the health-care law; the unemployment rate among African American teenagers; how his father, Rafael Cruz, used to make green eggs and ham for breakfast; a recent awards show acceptance speech by actor Ashton Kutcher; and the fast-food restaurants Denny's, Benihana and White Castle.
When he yielded briefly to take questions, the other senators would give extended remarks on their opposition to the health-care law and then ask Cruz questions that set up the Texas senator to continue his remarks. Meanwhile junior Democratic senators, who by tradition are tasked with presiding over the chamber, sat at the front of the room watching the exchange. Sen. Joe Manchin III (D-W. Va.) sat in the chair watching intently, while Sen. Elizabeth Warren (D-Mass.) was seen using an iPad.
Cruz and Lee have vowed to use whatever Senate procedural tactics are available to slow debate on the legislation. Their marathon speech is the culmination of a strategy they began developing in the summer, when Lee started looking for allies in a move to defund the health-care law by using annual spending bills for federal agencies as potential leverage.
Last week, the House passed a spending measure that would continue funding government operations by also defunding Obamacare, thus avoiding a government shutdown. The bill is now in the Senate, where Majority Leader Harry M. Reid (D-Nev.) has vowed to remove language defunding the law before calling a final vote. Cruz and his allies are hoping to stop Reid from doing so.
But Cruz and Lee have clashed with other Senate Republicans, who strongly objected to their plans in the days leading up to the start of the marathon speech. During their weekly luncheon earlier Tuesday, fellow Republicans urged Cruz to stand down and agree to quickly pass the spending measure and send it back to the House for potential amendments, according to two senators in the room.
Cruz refused the request, the senators said, meaning the Senate likely will have to continue debating the measure through the weekend, giving the House just a few hours to pass a new spending bill by the Oct. 1 deadline or face a government shutdown.
Asked as he left the lunch how long he planned to speak on the Senate floor, Cruz told reporters: "We shall see."
Moments later, Sen. Orrin Hatch (R-Utah) told reporters that he disagreed with Lee's assertions that there is considerable grass-roots support to defund the health-care law.
"There’s a lot of people upset on both sides, and I just don’t believe anybody benefits from shutting the government down, and certainly Republicans don’t. We learned that in 1995," Hatch said, referring to the last time congressional Republicans clashed with a Democratic president over federal spending.
“We’re in the minority, we have to find a way of standing up for our principles without immolating ourselves in front of everybody, in a way when we don’t have the votes to do it," he said.
Minority Leader Mitch McConnell (R-Ky.) said he also agreed with other Republicans that the Senate should move quickly to pass the spending measure and return it to the House.
"My own view is that it would be to the advantage of our colleagues in the House, who are in the majority, to shorten the process, and if the majority leader were to ask us to shorten the process, I would not object," McConnell told reporters.
"If the House doesn't get what we send over there until Monday, they're in a pretty tough spot," McConnell said later. "My own view is the House, having passed a bill that I really like and that I support, I hate to put them in a tough spot."
Rosalind S. Helderman and Lori Montgomery contributed to this report.
1 of 53 Full Screen Autoplay Close Skip Ad × Ted Cruz exits the presidential race View Photos Looking back at the Texas senator’s presidential bid. Caption Looking back at the Texas senator’s presidential bid. May 3, 2016 Sen. Ted Cruz speaks with his wife, Heidi, by his side during a primary night campaign event in Indianapolis. Cruz ended his presidential campaign, eliminating the biggest impediment to Donald Trump’s march to the Republican nomination. Darron Cummings/AP Buy Photo Wait 1 second to continue.
More on this story:
The Fix: How McConnell and Cornyn burst Ted Cruz's bubble
Cruz happy to be outcast in the showdown shutdown
Federal workers could lose pay if the government shuts down ||||| Sen. Ted Cruz (R-Texas) on Tuesday promised to speak until he can't stand up anymore, which launched an immediate debate on whether his speech is a filibuster or just a really long speech.
Cruz's defenders said the speech about the need to defund ObamaCare is a filibuster. But Democrats repeated throughout the day that it's just a really long speech.
So which is it? It depends who you ask — parliamentary experts say there is no precise definition of "filibuster."
ADVERTISEMENT
Many see a filibuster as talking on the Senate floor for an extended period of time in order to prevent action on measure. If that's the case, Cruz's speech today seems to fall short, because regardless of his remarks, the Senate will vote by Wednesday on whether to end debate on a motion to proceed to the House-passed continuing resolution.In other words, he can talk and talk, but that vote will happen on Wednesday at the latest because Senate Democrats filed cloture on the motion to proceed. Under Senate rules, filing for cloture sets up a firm vote after two days.Senate Majority Leader Harry Reid (D-Nev.) made it clear today that this is his preferred definition of "filibuster.""I want to disabuse everyone," Reid said this morning. "There will be no filibuster today. Filibusters stop people from voting, and we are going to vote tomorrow."But some say the term "filibuster" can be used to describe any dilatory tactic that delays the legislative process. If that's the case, Cruz's comments can be seen as a filibuster, because his opposition to moving ahead with the bill is preventing senators from reaching a unanimous consent agreement to vote earlier on ending debate on the motion to proceed.The Senate's own website seems to agree that any delaying tactic is a filibuster. The website defines the word as an "informal term for any attempt to block or delay Senate action on a bill or other matter by debating it at length, by offering numerous procedural motions, or by any other delaying or obstructive actions."So while Cruz didn't start his remarks by saying he is filibustering the continuing resolution, experts say he can make that claim. And then, people can disagree with that claim.Under Senate rules, Reid's decision on Monday to file cloture on the motion to proceed means two days must pass before a vote on the motion is held. That vote would be on whether to end debate on the motion to proceed.If the vote succeeds on Wednesday, that will start a 30-hour clock, and when that time expires, a vote on the actual motion to proceed is due.The two-day and 30-hour timelines are firm, but senators can always agree by unanimous consent to speed them up when everyone agrees. This week, many Senate Republicans were seeking to do just that, to give House Republicans more time to deal with the Senate-passed resolution. ||||| Tea party conservative Sen. Ted Cruz on Tuesday vowed to speak in opposition to President Barack Obama's health care law until he's "no longer able to stand," even though fellow Republicans privately urged him to back down from his filibuster for fear of a possible government shutdown in a week.
This image from Senate video show Sen. Ted Cruz, R-Texas, speaking on the Senate floor at the U.S. Capitol in Washington, Tuesday, Sept. 24, 2013. Cruz says he will speak until he's no longer able to... (Associated Press)
Senate Minority Leader Mitch McConnell of Ky. returns to his office on Capitol Hill in Washington, Tuesday, Sept. 24, 2013, after speaking on the floor of the Senate. In a break with Sen. Ted Cruz, R-Texas,... (Associated Press)
This image from Senate video show Sen. Ted Cruz, R-Texas, speaking on the Senate floor at the U.S. Capitol in Washington, Tuesday, Sept. 24, 2013. Cruz says he will speak until he's no longer able to... (Associated Press)
"This grand experiment is simply not working," the Texas freshman told a largely empty chamber of the president's signature domestic issue. "It is time to make D.C. listen."
Egged on by conservative groups, the potential 2016 presidential candidate excoriated Republicans and Democrats in his criticism of the three-year-old health care law and Congress' willingness to gut the law. Cruz supports the House-passed bill that would avert a government shutdown and defund Obamacare, as do many Republicans.
However, they lack the votes to stop Senate Majority Leader Harry Reid, D-Nev., from moving ahead on the measure, stripping the health care provision and sending the spending bill back to the House.
That didn't stop Cruz' quixotic filibuster. Standing on the Senate floor, with conservative Sen. Mike Lee of Utah nearby, Cruz talked about the American revolution, Washington critics and the impact of the health care law.
"The chattering class is quick to discipline anyone who doesn't fall in line," complained Cruz, who led a small band of opponents within Republican ranks.
Senate Minority Leader Mitch McConnell, R-Ky., and the GOP's No. 2, Sen. John Cornyn of Texas, opposed Cruz' tactic, and numerous Republicans stood with their leadership rather than Cruz. Sen. John Thune, the third-ranking Republican, declined to state his position.
"I think we'd all be hard-pressed to explain why we were opposed to a bill that we're in favor of," McConnell told reporters. "And invoking cloture on a bill that defunds Obamacare, it doesn't raise taxes, and respects the Budget Control Act strikes me as a no brainer."
One Senate Republican said McConnell had suggested in a meeting of rank and file senators that they not speak as long as the rules permit on the legislation, for fear it would give them little time to try to turn the political tables on Democrats or to avoid a possible shutdown.
The lawmaker spoke on condition of anonymity because the meeting was private.
"Delaying the opportunity for the House to send something back, it seems, plays right into the hands of Senate Democrats," Sen. Bob Corker, R-Tenn., said. "If I'm Harry (Reid), what I'd hope would happen is you wait until the very last minute to send something over to the House."
Asked whether there were any efforts in the GOP meeting to persuade Cruz and Lee to speed up Senate debate, Corker said, "The discussion came up about the advantage of having House Republicans weigh in again. And there were two senators who did not like that idea, not to name who they are."
The bill would keep the government operating until Dec. 15 and gut Obamacare.
Sen. Dick Durbin, the Senate's No. 2 Democrat, said the chamber may come out in favor of a smaller patch for bankrolling the government than the one envisioned in the House bill. The idea would be to get Congress working sooner than mid-December on a more sweeping piece of legislation _ known as an omnibus spending bill _ that he hopes would reverse some automatic spending cuts known as sequestration.
Despite Cruz' effort, a test vote was set for Wednesday. Reid had filed a motion to proceed to the measure, and under Senate rules lawmakers will vote even if Cruz speaks for hours and keeps the Senate in session overnight. Delaying tactics could push a final vote into the weekend, just days before the new fiscal year begins on Oct. 1.
The Cruz filibuster, which began at 2:41 p.m., echoed the effort of Sen. Rand Paul, R-Ky., who waged a nearly 13-hour filibuster of John Brennan's nomination for CIA director over the president's authority to use drones in the United States. The Senate eventually confirmed Brennan.
Outside conservative groups that have been targeting Republican incumbents implored their members to call lawmakers and demand that they stand with Cruz and his attack on Obamacare.
"This is the ultimate betrayal," the Senate Conservatives Fund said of McConnell and Cornyn in an email Tuesday morning. They pressed their members to "melt the phones," arguing that "we can't let these turncoats force millions of Americans into this liberal train wreck."
The Club for Growth and the Madison Project also pressed lawmakers to back Cruz' effort.
The issue has roiled the Republican Party, exacerbating the divide between tea party conservatives and GOP incumbents who repeatedly have voted against the health care law but now find themselves on the defensive. Republican senators said defunding Obamacare simply won't happen with a Democratic president and Democrats controlling the Senate.
"It will be a cold day in Gila Bend, Ariz., before we defund Obamacare," said Sen. John McCain, R-Ariz., the party's 2008 presidential nominee. "A very cold day. In fact there may be a snowstorm. ... I know how this movie ends. I don't know all the scenes before it ends, but I know how it ends. We don't defund Obamacare."
Sen. Saxby Chambliss, R-Ga., said that as long as Obama has the power to veto legislation, "the fate of that bill is pretty much in his control, and we know what he's going to do."
___
Associated Press writers Andrew Taylor, Alan Fram, David Espo and Laurie Kellman contributed to this report. ||||| Ted Cruz finally released his grip on the Senate floor after more than 21 hours of speaking about the need to defund Obamacare.
The Texas Republican seized control of the Senate floor on Tuesday about 2:42 p.m. vowing to “speak in support of defunding Obamacare until I am no longer able to stand.” Cruz could have spoken all the way up to a 1 p.m. procedural vote on moving spending bill forward, but he relented at noon.
Text Size -
+
reset Top 10 Cruz floor quotes Debt ceiling: By the numbers
“It is my it intention to accept the end of this at noon,” Cruz said.
After his 20th hour holding the floor, Cruz asked Senate Majority Leader Harry Reid (D-Nev.) to come to the floor to listen to a pair of requests that sparked a bizarre exchange. Cruz asked to waive Wednesday’s vote and move a high-stakes procedural vote to Friday rather than Saturday to allow more people to watch.
“I think it is better for this country that this vote is visible,” Cruz said. “Sticking it on Saturday in the middle in the middle of football games would disserve that objective.”
Reid ignored Cruz’s requests and asked for far more time to be yielded back to allow the House more time to consider what the Senate will send back.
“There’s a possibility that they may not accept what we send them and they may want to send us something back,” Reid said.
Cruz cut off Reid, accusing him of “making a speech” rather than asking Cruz a question.
Despite his Ironman stand on the floor of the upper chamber, Cruz could not stop a Senate already in motion from eventually returning a clean continuing resolution to the House scant days before a government shutdown is scheduled to take effect on Oct. 1. Under Senate rules, the latest the upper chamber could take the first procedural vote on a House spending bill that defunds Obamacare is 1 p.m. on Wednesday — a reality Majority Leader Harry Reid (D-Nev.) broadcast to the world Tuesday morning when he opened the Senate and again on Wednesday.
(PHOTOS: Longest filibusters in history)
“This is not a filibuster. This is an agreement that he and I made that he could talk,” Reid said Wednesday.
In other words, from the beginning it was all over save for the theatrics. But Cruz offered plenty Tuesday by holding the Senate floor for hours about why Obamacare should cease to exist. He was flanked at times by Republican Sens. David Vitter of Louisiana, Mike Lee of Utah, Jeff Sessions of Alabama, Pat Roberts of Kansas, Mike Enzi of Wyoming, James Inhofe of Oklahoma, Jim Risch of Idaho, Marco Rubio of Florida and Rand Paul of Kentucky, who recommended Cruz wear comfortable shoes and not eat food on national television.
Cruz touched on a wide variety of subjects during his marathon, from Dr. Seuss to college kids’ inability to find White Castle burgers during the wee hours because of Obamacare. He read tweets from constituents and related stories from a “lost generation” of young people plagued by the Affordable Care Act.
(PHOTOS: Key quotes from Ted Cruz)
He also read bedtime stories to his children, who he said were at home watching him on C-SPAN. One was the Seuss tale “Green Eggs and Ham.”
Cruz was even joined by Democrats, including Sen. Tim Kaine of Virginia and Majority Whip Dick Durbin of Illinois, who pointedly questioned Cruz both Tuesday night and Wednesday morning. Kaine sparred with Cruz for 30 minutes and argued some voters had sent people to Washington to preserve Obamacare. Kaine won his Senate seat in 2012 by besting former Sen. George Allen (R-Va.), who had staked his election on repeal.
Durbin on Tuesday explained why he voted for the health care law and argued the law made it easier for one of his constituents to qualify for Medicaid. Durbin asked if — given Cruz’s Ivy League education — he knew he did not have the votes to defund Obamacare in the Senate.
(WATCH: Cruz’s speech: 10 colorful quotes)
“Certainly the senator realizes that it takes 60 votes,” Durbin told Cruz.
“I would note that I’m quite familiar with what is necessary to defund Obamacare,” Cruz shot back.
Cruz bashed his colleagues in Washington for accepting that stopping Obamacare is impossible and charged that the outcome of the drama in the Senate this week is predetermined, comparing it to professional wrestling.
He also dinged anonymous staffers and Republican lawmakers for criticizing him in the press, yet also divulged a conversation between Lee and an anonymous House member. According to Cruz, that lawmaker told Lee of the House sending over its defund bill: “You guys should be grateful. We gave you your vote.”
(QUIZ: Do you know Ted Cruz?)
“Why should we feel gratitude for a vote that’s destined to lose?” Cruz asked of other Republicans, referring to Reid’s procedural upper-hand. “Symbolic votes are great for getting elected.”
A potential 2016 presidential candidate, Cruz is playing to the GOP base as much as he is bashing business as usual inside the Capitol. He said some lawmakers are too concerned with hitting the D.C. cocktail circuit, taking show votes and giving speeches to change the way Washington works, asking at one point: “Where is the outrage?”
“A lot of members of this body have — at least so far — not showed up to battle,” Cruz said, repeatedly referring to the empty chairs in the Senate chamber while he talked. “The chattering class is quick to discipline anyone who doesn’t fall in line.”
(Also on POLITICO: Hillary Clinton defends Obamacare, slams defunding efforts)
Cruz said his speech was meant to “celebrate” the American democratic system and that if senators listened to their constituents back home, he and his conservative allies truly could win.
“The vote would be 100-0 to defund Obamacare,” he said | – It's filibuster-y, but apparently not a filibuster. Sen. Ted Cruz took the Senate floor about 2:40pm Eastern today and promised to speak against ObamaCare until he is "no longer able to stand," reports AP. It sounds like an "old-fashioned talking filibuster," says the Washington Post, although, technically speaking, it's not a filibuster in the eyes of many parliamentarians. The Hill explains: A filibuster holds the promise of a lawmaker speaking long enough to block a vote, but that can't happen here. No matter what Cruz does, the Senate will vote on a procedural motion tomorrow on the House resolution that defunds ObamaCare. "I want to disabuse everyone," said Harry Reid before Cruz started. "There will be no filibuster today. Filibusters stop people from voting, and we are going to vote tomorrow." But even if it amounts to nothing more than a really long speech, Cruz is taking advantage. "I rise today in opposition to ObamaCare,” he began. "This grand experiment is simply not working. It is time to make DC listen." Fellow conservative Mike Lee of Utah gave Cruz his first break after about an hour. Politico's take: "A potential 2016 presidential candidate, Cruz is playing to the GOP base as much as he is bashing business as usual inside the Capitol." |
Once upon a time 20 years ago, the average San Francisco home cost the inflation-adjusted equivalent of $400,000. All one had to do was own a million-dollar home (or $1.5 million today) and they could be confidently among San Francisco’s wealthy elite. Ah, it was a simpler time.
But now? Heck, every house costs at least a million dollars. They even advertise it. And the average income for both individuals and families is climbing every year. It takes a lot more riches just to be rich anymore.
So who's rich and who's not in this crazy market? To alleviate this burning question, the local branch of investment firm Charles Schwab commissioned a survey of a 1,000 Bay Area residents to see what they think.To wit: "The survey of 1,000 Bay Area residents found that local residents think it takes about $2.5 million in most areas of the United States to be considered wealthy while a net worth of more than $6 million is what it takes to be wealthy in the Bay Area."
Ignoring for a moment the idea that $2 million doesn’t qualify as rich in most places anymore, imagine how embarrassing it must be for anyone worth $3 million to find out that they only rate as middle class in the Bay Area. What will the neighbors think?
Of course, this was only a survey of people’s opinions, meaning it’s less a hard-numbers look at the cost of living than a glimpse into how people are feeling these days. Eighty-six percent of those polled say that the cost of living in the region is too high, 55 percent say they have trouble meeting their financial goals, and 68 percent say the Bay Area has one of the worst housing markets in the country.
(The other 32 percent are presumably sellers.) ||||| Charles Schwab study concludes that you need $6 million to be 'wealthy' in the Bay Area
If you don't already feel poor living in the Bay Area, you will now.
A new Charles Schwab survey says that in order to be considered "wealthy" in the Bay Area, you need a net worth of at least $6 million. A net worth of $1 million is the baseline for being "comfortable."
Excuse us while we take a quick break to cry into the $8 artisanal coffee that we really can't afford.
Charles Schwab surveyed 1,000 Bay Area residents aged 21 to 75 in Alameda, Contra Costa, Marin, San Francisco, San Mateo, Santa Clara and Solano counties. The survey asked what residents considered enough money to be "wealthy" vs. "comfortable."
Respondents said they believed $2.5 million was the average needed to be wealthy in other parts of the country.
Unsurprisingly, the survey also found that locals are shocked by the cost of living here. Eighty-six percent said the cost of living is "unreasonable" and 55 percent said living in the Bay Area makes it "difficult to reach their financial goals." Only two percent agreed strongly that the cost of living in the Bay Area is reasonable. Who these people are, exactly, remains a question.
Offsetting those depressing statistics are upbeat outlooks on job opportunities. Eighty percent believe the region is a great place for career growth, 88 percent say it's a prime place for innovation and 70 percent say the Bay Area's economy is better than the national one.
To read the full Charles Schwab study, click here. | – Residents of the Bay Area in California think it takes more than twice as much money to be considered wealthy there than in the rest of the country, the San Francisco Chronicle reports. A survey of 1,000 residents by Charles Schwab found you need to be worth more than $6 million in the Bay Area to be considered wealthy, but only $2.5 million in the rest of the country. They say it also takes $1 million to even be "comfortable" in the Bay Area. The survey found 86% of residents believe the Bay Area's cost of living to be unreasonable. The whole thing is bad news for both actual poor residents and "Bay Area poor" residents. "Imagine how embarrassing it must be for anyone worth $3 million to find out that they only rate as middle class in the Bay Area," Curbed San Francisco notes. "What will the neighbors think?" |
the detailed analysis of the available high quality cosmological data ( type ia supernovae @xcite , @xcite ; cmb @xcite , @xcite , etc . ) leads to the conclusion that we live in a flat and accelerating universe . in order to investigate the cosmic history of the observed universe
, we have to introduce a general cosmological model which contains cold dark matter to explain the large scale structure clustering and an extra component with negative pressure , the vacuum energy ( or in a more general setting the `` dark energy '' ) , to explain the observed accelerated cosmic expansion ( refs . @xcite and references therein ) .
the nature of the dark energy is one of the most fundamental and difficult problems in physics and cosmology .
there are many theoretical speculations regarding the physics of the above exotic dark energy , such as a cosmological constant ( vacuum ) , quintessence , @xmath1essence , vector fields , phantom , tachyons , chaplygin gas and the list goes on ( see @xcite and references therein ) .
such studies are based on the general assumption that the real scalar field @xmath2 rolls down the potential @xmath3 and therefore it could resemble the dark energy @xcite .
this is very important because scalar fields could provide possible solutions to the cosmological coincidence problem . in this framework
, the corresponding stress - energy tensor takes the form of a perfect fluid , with density @xmath4 and pressure @xmath5 . from a cosmological point of view , if the scalar field varies slowly with time , so that @xmath6 , then @xmath7 , which means that the scalar field evolves like a vacuum energy .
of course in order to investigate the overall dynamics we need to define the functional form of the potential energy .
the simplest example found in the literature is a scalar field with @xmath8 ( see @xcite , @xcite for a review ) and it has been shown that the time evolution of this scalar field is dominated by oscillations around @xmath9 . of course , the issue of the potential energy has a long history in scalar field cosmology ( see @xcite and references therein ) and indeed several parameterizations have been proposed ( exponential , power law , hyperbolic , etc . )
the aim of the present work is to investigate the observational consequences of the overall dynamics of a family of flat cosmological models by using a hyperbolic scalar field potential which appears to act both as dark matter and dark energy @xcite .
to do so , we use the traditional hamiltonian approach .
in fact , the idea to build cosmological models in which the dark energy component is somehow linked with the dark matter is not new in this kind of studies .
recently , alternative approaches to the unification of dark energy and dark matter have been proposed in the framework of the generalized chaplygin gas @xcite and in the context of supersymmetry @xcite .
the structure of the paper is as follows .
the basic theoretical elements of the problem are presented in section 2 by solving analytically [ for spatially flat unified dark matter ( udm ) scalar field models ] the equations of motion . in section 3
, we present the functional forms of the basic cosmological functions [ @xmath10 , @xmath11 and @xmath12 ] . in section 4 we place constraints on the main parameters of our model by performing a joint likelihood analysis utilizing the snia data @xcite and the observed baryonic acoustic oscillations ( bao ) @xcite and @xcite .
in particular , we find that the matter density at the present time is @xmath13 while the corresponding scalar field is @xmath14 in geometrical units ( 0.084 in planck units ) .
section 5 outlines the evolution of matter perturbations in the udm model .
also we compare the theoretical predictions provided by the udm scenario with those found by three different type of dark energy models namely @xmath0 cosmology , parametric dark energy model and variable chaplygin gas .
we verify that at late times ( after the inflection point ) the dynamics of the udm scalar model is in good agreement with those predicted by the above dark energy models , although there are some differences , especially at early epochs : ( i ) the udm equation of state parameter takes positive values at large redshifts , ( ii ) it behaves well with respect to the cosmic coincidence problem , and ( iii ) before the inflection point the cosmic expansion in the udm model is much more decelerated than in the other three dark energy models , implying that large scale structures ( such as galaxy clusters ) are more bound systems than the corresponding structures produced by the other three dark energy models .
finally , we draw our conclusions in section 6 .
within the framework of homogeneous and isotropic scalar field cosmologies it can be proved ( see ref .
@xcite ) that the main cosmological equations ( the so called friedmann - lemaitre equations ) can be obtained by a lagrangian formulation : @xmath15 + 3ka \label{langaf } , where @xmath10 is the scale factor of the universe , @xmath11 is the scalar field , @xmath3 is the potential energy and @xmath16 is the spatial curvature . indeed the equations of motion take the following forms ( which correspond to @xmath17 ; for planck units we have to set @xmath18 with @xmath19 , while for s.i. units we have @xmath20 ) : 3\left[\left(\frac{\dot{a}}{a}\right)^{2}+\frac{k}{a^{2}}\right ] = \frac{\dot{\phi}^{2}}{2}+v(\phi ) \label{efe1 } , 2\left(\frac{\ddot{a}}{a}\right)+\left(\frac{\dot{a}}{a}\right)^{2}+\frac{k}{a^{2 } } = -\frac{\dot{\phi}^{2}}{2}+v(\phi ) \label{efe2 } and @xmath22 , where the over - dot denotes derivatives with respect to time while prime denotes derivatives with respect to @xmath2 .
we would like to stress here that in this work we consider a spatially homogeneous scalar field @xmath2 , ignoring the possible coupling to other fields and quantum - mechanical effects . on the other hand , introducing in the global dynamics a new degree of freedom , in a form of the scalar field @xmath2 , it is possible to make the vacuum energy a function of time ( see @xcite , @xcite , @xcite ) .
note of course that the geometry of the space - time is described by the friedmann - lemaitre - robertson - walker ( flrw ) line element . in order to study the above system of differential equations we need to define explicitly the functional form of the scalar field potential energy , @xmath3 , which is not an easy task to do . indeed , in the literature , due to the unknown nature of the dark energy
, there are many forms of potentials proposed by several authors ( for a review see @xcite , @xcite , @xcite ) which describe differently the physics of the scalar field .
it is worth pointing out that for some special cases analytical solutions have been found ( refs . @xcite and references therein ) . as an example , if the potential @xmath3 is modeled as a power law @xmath23 , then the energy density of the scalar field evolves like @xmath24 which means that , for @xmath25 or @xmath26 the corresponding energy density behaves either like non relativistic or relativistic matter . in this work ,
we have used a functional form of @xmath3 ( see @xcite ) for which we solve the previous dynamical problem analytically .
this potential corresponds to the so called _ unified dark matter _
( hereafter udm ) scenario @xcite , @xcite , @xcite : @xmath27 . following the bertacca et al .
@xcite nomenclature , the real constants in eq .
( [ potent ] ) are selected such that @xmath28 . as expected
there is one minimum at the point @xmath9 , which reads @xmath29 . we would like to point out that as long as the scalar field takes large negative values the udm model has the attractive feature due to @xmath30 @xcite .
this property simply says that the energy density in @xmath2 tracks @xcite the radiation ( matter ) component .
in fact the udm potential was designed to mimic both the dark matter and the dark energy . indeed , performing a taylor expansion of the potential around its minimum we get
@xmath31 which means that at an early enough epoch the cosmic
fluid behaves like radiation @xcite ( @xmath32 ) , then evolves to the matter epoch ( @xmath33 ) and finishes with a phase that looks like a cosmological constant ( see also @xcite ) .
changing now the variables from @xmath34 to @xmath35 using the relations : @xmath36 with @xmath37 , the lagrangian ( [ langaf ] ) is written : @xmath38 + \frac{1}{2}\left[3^{4/3 } k ( x_{2}^{2}-x_{1}^{2})^{1/3}\right ] . the scale factor ( @xmath39 ) in the udm scenario is now given by : @xmath40 ^{1/3 } \label{alcon } , which means that the new variables have to satisfy the following inequality : @xmath41 .
it is straightforward now from the lagrangian ( [ langxy ] ) to write the corresponding hamiltonian : @xmath42 - \frac{1}{2}\left[3^{4/3 } k ( x_{2}^{2}-x_{1}^{2})^{1/3}\right ] , where @xmath43 , @xmath44 denote the canonical momenta , @xmath45 , @xmath46 are the oscillators ' `` frequencies '' with units of inverse of time , and @xmath47 . the dynamics of the closed flrw scalar field cosmologies has been investigated thoroughly in @xcite .
in particular , for a semi - flat geometry ( @xmath48 ) we have revealed cases where the dynamics of the system ( see section 3.1 in @xcite ; orbit 5 in fig . 1 , scale factor vs. time , and fig . 4 ) is close to the concordance @xmath0-cosmology , despite the fact that for the semi - flat udm model there is a strong indication of chaotic behavior . in this paper
we would like to investigate the potential of a spatially flat udm scenario ( @xmath49 ) , since the analysis of the cosmic microwave background ( cmb ) anisotropies has strongly suggested that the spatial geometry of the universe is flat @xcite . technically speaking , in the new coordinate system
our dynamical problem is described well by two independent hyperbolic oscillators and thus the system is fully integrable . indeed in the new coordinate system
the corresponding equations of motion can be written as @xmath50 and it is routine to perform the integration to find the analytical solutions : @xmath51 , where @xmath52 , @xmath53 , @xmath54 and @xmath55 are the integration constants of the problem . with the aid of eq.([trans11 ] ) and assuming that the total energy ( @xmath56 ) of the system is zero , the above constants satisfy the following restriction : @xmath57 . as expected , the phase space of the current dynamical problem is simply described by two hyperbolas @xmath58 whose axes have a ratio @xmath59 ( @xmath60 ) .
[ figure 1 : likelihood contours in the @xmath170 plane . the contours correspond to 1@xmath61 , 2@xmath61 and 3@xmath61 confidence levels . the thick ( thin ) contours correspond to the snia ( baos ) likelihoods while the solid square is the best fit solution : @xmath62 and @xmath63 . insert panel : the solutions within @xmath64 contours in the @xmath65 plane , where @xmath66 is the present value of the scalar field in geometrical units ( @xmath67 ) ; for planck units ( @xmath18 ) the @xmath66-axis must be multiplied by @xmath68 . using the best fit solution we find @xmath69 or @xmath70 in planck units . ]
in this section , with the aid of the basic hyperbolic functions , we analytically derive the predicted time dependence of the main cosmological functions in the udm cosmological model .
if we combine eq.([trans11 ] ) together with the initial parameterization [ see eq.([trans ] ) ] we immediately obtain the following expressions : @xmath71 , and after some algebra the evolution of the scalar field becomes : @xmath72 . using eqs .
( [ potent ] ) and ( [ phi11 ] ) one can prove that @xmath73 . now the range of @xmath74-values for which the udm scalar field is well defined [ due to eq.([phi11 ] ) ] is : @xmath75 .
evidently , when the system reaches the critical point @xmath9 then @xmath76 ( or @xmath77 ) . for this to be the case we must have @xmath78 and therefore , @xmath79 .
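as a rough numerical cross - check of these analytical solutions , one can integrate the flat ( @xmath49 ) field equations directly for any assumed potential . the sketch below is a toy python example , not the paper's computation : the potential constants V0 and AMP , the initial data and the time span are illustrative only , and units @xmath17 are assumed . it starts the field kinetic - dominated and locates the epoch where the acceleration sets in :

import numpy as np
from scipy.integrate import solve_ivp

# toy hyperbolic potential with a minimum at phi = 0 (V0, AMP and the
# initial data are illustrative, not the fitted udm values);
# units: 8*pi*G = c = 1, spatially flat (k = 0)
V0, AMP = 0.7, 1.125
V  = lambda p: V0 + AMP * np.sinh(p) ** 2
dV = lambda p: AMP * np.sinh(2.0 * p)

def rhs(t, y):
    a, phi, dphi = y
    H = np.sqrt((0.5 * dphi ** 2 + V(phi)) / 3.0)   # flat friedmann equation
    return [a * H,                                   # da/dt
            dphi,                                    # dphi/dt
            -3.0 * H * dphi - dV(phi)]               # klein-gordon equation

# kinetic-dominated start: w ~ +1 early, w -> -1 once the field settles
sol = solve_ivp(rhs, (0.0, 5.0), [1e-3, -1.2, 8.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
t = np.linspace(0.05, 5.0, 500)
a = sol.sol(t)[0]
addot = np.gradient(np.gradient(a, t), t)            # discrete a''(t)
print("acceleration sets in near t =", t[np.argmax(addot > 0.0)])

with a kinetic - dominated start the toy run reproduces the qualitative udm history : an early decelerated phase followed by a late , de sitter - like accelerated one .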
[ figure 2 , top panel : the evolution of the scale factor for the udm model compared with the @xmath0 cosmology ( short dashed line ) , vcg ( dot dashed ) and cpl ( long dashed ) dark energy models . bottom panel : the evolution of the scalar field ; the insert panel shows the behavior of the potential normalized to unity at the present time . note that @xmath80 gyr is the present age of the universe . ]
inserting eq.([trans11 ] ) into eq.([alcon ] ) , the scale factor , normalized to unity at the present epoch , evolves in time as @xmath81^{1/3 } , where @xmath82 is the present age of the universe in billion years .
the constant @xmath54 is related to @xmath83 because at the singularity ( @xmath84 ) , the scale factor has to be exactly zero [ see eq.([a11 ] ) ] .
after some algebra , we find that @xmath85 furthermore , we investigate the circumstances under which an inflection point exists and therefore have an acceleration phase of the scale factor .
this crucial period in the cosmic history corresponds to @xmath86 which implies that the condition @xmath87 should contain roots which are real and such that @xmath88 .
the above equation is solvable because for @xmath89 the potential energy [ @xmath3 ] takes only positive values .
knowing the integration constants ( @xmath91 ) of the current dynamical problem , we can calculate the inflection point by solving numerically the following equation : @xmath92 \left[1-\psi^{2}(t_{i})\right ] = 0 , with @xmath93 , where @xmath94 and @xmath95 . in addition , the hubble function predicted by the udm model can be viewed as the sum of basic hyperbolic functions : @xmath96 with @xmath97 , where @xmath98 is the hubble constant . in this work we use @xmath99 km s@xmath100 mpc@xmath100 with @xmath101 @xcite , or @xmath102 gyr@xmath100 corresponding to @xmath80 gyr .
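a minimal root - finding sketch for the inflection point ( python ) . the hyperbolic - oscillator solutions are represented here by the assumed illustrative forms x1 = A sinh ( w1 t ) and x2 = B sinh ( w2 t ) with toy constants , and the scale factor is taken proportional to ( x2^2 - x1^2 )^{1/3 } as above ; none of the numbers below are the best - fit values :

import numpy as np
from scipy.optimize import brentq

# toy hyperbolic-oscillator solutions (amplitudes/frequencies illustrative);
# the udm scale factor obeys a^3 proportional to x2^2 - x1^2 (x2^2 >= x1^2)
w1, w2, A, B = 0.5, 1.0, 0.6, 1.0
x1 = lambda t: A * np.sinh(w1 * t)
x2 = lambda t: B * np.sinh(w2 * t)
a  = lambda t: (x2(t) ** 2 - x1(t) ** 2) ** (1.0 / 3.0)

def addot(t, h=1e-4):                 # central second difference for a''(t)
    return (a(t + h) - 2.0 * a(t) + a(t - h)) / h ** 2

# early on a ~ t^(2/3) (decelerated), at late times a grows exponentially,
# so a''(t) changes sign exactly once: the inflection point t_i
t_i = brentq(addot, 0.1, 5.0)
print("inflection point t_i =", t_i, " a(t_i) =", a(t_i))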
also we can relate the frequency @xmath103 of the hyperbolic oscillator in the @xmath104 axis with the well known cosmological parameters .
indeed , @xmath103 is given by @xmath105 while @xmath106 has no units .
notice that , @xmath107 is the matter density at the present time .
in this work we use the so called baryonic acoustic oscillations ( baos ) in order to constrain the current cosmological models .
baos are produced by pressure ( acoustic ) waves in the photon - baryon plasma in the early universe , generated by dark matter overdensities .
first evidence of this excess was recently found in the clustering properties of the luminous sdss red - galaxies @xcite , @xcite and it can provide a
standard ruler with which we can put constraints on the cosmological models . for a spatially flat flrw we use the following estimator : @xmath108^{1/3 } } \left[\int_{a_{s}}^{1 } \frac{{\rm d}y}{y^{2}h(y)/h_{0 } } \right]^{2/3 } , measured from the sdss data to be @xmath109 , where @xmath110 [ or @xmath111 ] . therefore , the corresponding @xmath112 function is simply written : @xmath113^{2}/0.017^{2 } , where @xmath114 is a vector containing the cosmological parameters that we want to fit .
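for concreteness , the same bao constraint is often written in redshift form ( eisenstein et al . 2005 ) : a = \sqrt{\omega_{m } } \ , e(z_{1})^{-1/3 } [ ( 1/z_{1 } ) \int_{0}^{z_{1 } } dz / e(z ) ]^{2/3 } , with a = 0.469 \pm 0.017 at z_{1 } = 0.35 and e(z ) = h(z)/h_{0 } . a minimal python sketch ; the flat @xmath0 form of e(z ) is used here only as a stand - in for the model hubble functions :

import numpy as np
from scipy.integrate import quad

def chi2_bao(E, om, z1=0.35, A_obs=0.469, sig=0.017):
    """chi^2 of the bao a-parameter for a flat model with E(z) = H(z)/H0."""
    I, _ = quad(lambda z: 1.0 / E(z), 0.0, z1)
    A = np.sqrt(om) * E(z1) ** (-1.0 / 3.0) * (I / z1) ** (2.0 / 3.0)
    return ((A - A_obs) / sig) ** 2

# usage with a flat lcdm stand-in
om = 0.28
E = lambda z: np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)
print(chi2_bao(E, om))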
also , we additionally utilize the sample of 192 supernovae of davis et al . @xcite . in this case
, the @xmath115 function becomes : @xmath116^{2 } , where @xmath117 is the observed scale factor of the universe , @xmath118 is the observed redshift , @xmath119 is the distance modulus @xmath120 and @xmath121 is the luminosity distance @xmath122 , where @xmath123 is the speed of light ( @xmath124 here ) .
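a minimal sketch of the snia chi - square in python . the 192 measured moduli are not reproduced here , so synthetic data stand in for them ; the distance modulus and luminosity distance follow the standard flat - universe definitions quoted above , and h_{0 } = 70 is an illustrative choice :

import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 2.99792458e5, 70.0                    # km/s and km/s/Mpc

def mu_model(z, E):
    """distance modulus in a flat universe: mu = 5 log10(d_L / Mpc) + 25."""
    I, _ = quad(lambda x: 1.0 / E(x), 0.0, z)
    dL = (C_KMS / H0) * (1.0 + z) * I             # luminosity distance, Mpc
    return 5.0 * np.log10(dL) + 25.0

def chi2_snia(zs, mus, errs, E):
    return sum(((mu_model(z, E) - m) / s) ** 2
               for z, m, s in zip(zs, mus, errs))

# synthetic stand-in for the 192 snia (the real moduli are not listed here)
om = 0.28
E = lambda z: np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)
rng = np.random.default_rng(1)
zs = np.linspace(0.05, 1.5, 20)
mus = np.array([mu_model(z, E) for z in zs]) + 0.1 * rng.normal(size=20)
print(chi2_snia(zs, mus, np.full(20, 0.2), E))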
finally we can combine the above probes by using a joint likelihood analysis : @xmath125 in order to put even further constraints on the parameter space used .
note that we define the likelihood estimator as : @xmath126 ] . without wanting to appear too pedagogical , we remind the reader of some basic elements of the concordance @xmath0-cosmology . in this framework ,
the normalized scale factor of the universe is @xmath127 and the hubble function is written as @xmath128 . comparing the @xmath0 model with the observational data ( we sample @xmath129 ] in steps of 0.01 ) we find that the best fit value is @xmath130 with @xmath131 ( @xmath132 ) , in very good agreement with the 5-year wmap data @xcite . the inflection point takes place at @xmath133^{1/3 } ; therefore , we estimate @xmath134 gyr@xmath100 , @xmath135 and @xmath136 .
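these concordance quantities have textbook closed forms for a flat @xmath0 model ( standard results , quoted independently of the equations above ) : a(t ) = ( \omega_{m}/\omega_{\lambda } )^{1/3 } \sinh^{2/3 } ( 3 \sqrt{\omega_{\lambda } } h_{0 } t / 2 ) , an inflection point at a_{i}^{3 } = \omega_{m}/ ( 2 \omega_{\lambda } ) , and q_{0 } = \omega_{m}/2 - \omega_{\lambda } . a short sketch with the illustrative choice om = 0.28 :

import numpy as np

om, H0 = 0.28, 70.0                     # illustrative values, not the fit
ol = 1.0 - om                           # flat universe
H0_gyr = H0 / 977.8                     # km/s/Mpc -> 1/Gyr

pref = 2.0 / (3.0 * H0_gyr * np.sqrt(ol))
t0  = pref * np.arcsinh(np.sqrt(ol / om))        # present age
t_i = pref * np.arcsinh(1.0 / np.sqrt(2.0))      # inflection time
a_i = (om / (2.0 * ol)) ** (1.0 / 3.0)           # inflection scale factor
q0  = 0.5 * om - ol                              # deceleration parameter today

print(f"t0 = {t0:.2f} gyr, t_i = {t_i:.2f} gyr")
print(f"a_i = {a_i:.3f} (z_i = {1.0 / a_i - 1.0:.3f}), q0 = {q0:.3f}")

for om = 0.28 the sketch gives t0 of about 13.7 gyr , t_i of about 7.2 gyr and z_i of about 0.73 .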
the deceleration parameter at the present time is @xmath137 .
[ figure 3 : relative deviations of the distance moduli ( with respect to the @xmath138 model ) of 192 snia data ( open points ) from @xcite as a function of redshift . for comparison we plot the @xmath139 ( solid line ) , @xmath140 ( long dashed line ) and @xmath141 ( dot dashed line ) . ]
in this case we use a simple parameterization for the dark energy equation of state parameter which is based on a taylor expansion around the present time ( see chevallier & polarski @xcite and linder @xcite , hereafter cpl ) : @xmath142 . the hubble parameter is given by : @xmath143^{1/2 } , where @xmath144 and @xmath145 are constants .
we sample the unknown parameters as follows : @xmath146 ] and @xmath147 ] in steps of 0.01 .
we find that for @xmath138 the overall likelihood function peaks at @xmath148 and @xmath149 while the corresponding @xmath150 is 193.6 ( @xmath151 ) .
the deceleration parameter at the present time is @xmath152 .
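for reference , the cpl dark energy density evolves as \rho_{de } \propto a^{-3 ( 1 + w_{0 } + w_{a } ) } \exp [ -3 w_{a } ( 1 - a ) ] , a standard result that fixes the normalized hubble function of a flat cpl model ; the parameter values below are illustrative , not the fitted ones :

import numpy as np

def E_cpl(z, om, w0, wa):
    """H(z)/H0 for a flat universe with w(a) = w0 + wa * (1 - a)."""
    a = 1.0 / (1.0 + z)
    rho_de = (1.0 - om) * a ** (-3.0 * (1.0 + w0 + wa)) \
             * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(om * a ** -3.0 + rho_de)

# present-day deceleration parameter: q0 = 1/2 + (3/2) w0 (1 - om)
om, w0, wa = 0.28, -1.0, 0.5            # illustrative parameter values
print(E_cpl(1.0, om, w0, wa), 0.5 + 1.5 * w0 * (1.0 - om))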
let us consider now a completely different model , namely the variable chaplygin gas ( hereafter vcg ) , which corresponds to a born - infeld tachyon action @xcite .
recently , an interesting family of chaplygin gas models was found to be consistent with the current observational data @xcite . in the framework of a spatially flat flrw metric
, it can be shown that the hubble function takes the following form : @xmath153^{1/2 } , where @xmath154 is the density parameter for the baryonic matter @xcite ; we sample @xmath155 ] in steps of 0.01 and @xmath156 ] in steps of 0.02 .
the corresponding effective equation of state parameter @xmath157 is related to @xmath158 according to @xmath159 while the effective matter density parameter is : @xmath160 .
we find that the best fit parameters are @xmath161 and @xmath162 ( @xmath163 ) with @xmath164 ( @xmath151 ) and the present value of the deceleration parameter is @xmath165 . in order to predict
analytically the time evolution of the main cosmological functions [ @xmath11 , @xmath10 , @xmath166 and @xmath167 ] , we have to define the corresponding unknown constants of the problem ( @xmath91 ) . at the same time , from the restrictions found in section 3 ( see eqs .
[ om11 ] , [ con2 ] and [ omm ] ) , we can reduce the parameter space to @xmath168 .
we do so by fitting the predictions of the udm cosmological model to recent observational data . here
, we use @xmath129 ] and @xmath169 ] in steps of 0.01 .
figure 1 ( thin dashed lines ) shows the 1@xmath61 , 2@xmath61 and 3@xmath61 confidence levels in the @xmath170 plane when using baos .
obviously , the @xmath54 parameter is not constrained by this analysis and all the values in the interval @xmath171 are acceptable
. however , the baos statistical analysis puts constraints on the matter density parameter @xmath13 .
therefore , in order to put further constraints on @xmath54 we additionally utilize the snia data . in figure 1 ( thick solid lines ) , we present the snia likelihood contours and we find that the best fit solution is @xmath172 and @xmath173 . the joint likelihood function peaks at @xmath174 and @xmath175 ( @xmath176 ) with @xmath177 ( @xmath151 ) . note that the errors of the fitted parameters represent @xmath64 uncertainties . in the insert plot of figure 1 we provide the solutions ( circles - baos and triangles - snia ) within @xmath64
contours in the @xmath65 plane , where @xmath66 is the present value of the scalar field .
the corresponding best fit value of the scalar field ( see eqs .
[ psi11 ] and [ phi11 ] ) is @xmath69 or @xmath70 in planck units ( @xmath18 ) , while the frequencies are @xmath178gyr@xmath100 and @xmath179gyr@xmath100 .
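a sketch of how such a joint scan can be reproduced in practice ( python ) . the grid is sampled in steps of 0.01 , mirroring the sampling quoted in the text ; since the udm hubble function is not repeated here , a flat @xmath0 e(z ) serves only as a stand - in , and the snia data are synthetic :

import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 2.99792458e5, 70.0

def E(z, om):                    # flat lcdm stand-in for the model H(z)/H0
    return np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)

def chi2_bao(om, z1=0.35):
    I, _ = quad(lambda z: 1.0 / E(z, om), 0.0, z1)
    A = np.sqrt(om) * E(z1, om) ** (-1.0 / 3.0) * (I / z1) ** (2.0 / 3.0)
    return ((A - 0.469) / 0.017) ** 2

def mu(z, om):
    I, _ = quad(lambda x: 1.0 / E(x, om), 0.0, z)
    return 5.0 * np.log10((C_KMS / H0) * (1.0 + z) * I) + 25.0

def chi2_sn(om, zs, mus, err=0.2):
    return sum(((mu(z, om) - m) / err) ** 2 for z, m in zip(zs, mus))

zs = np.linspace(0.1, 1.4, 15)                  # synthetic snia sample
mus = [mu(z, 0.28) for z in zs]
grid = np.arange(0.10, 1.00, 0.01)              # steps of 0.01, as in the text
total = [chi2_bao(om) + chi2_sn(om, zs, mus) for om in grid]
print("joint best fit: om =", round(grid[int(np.argmin(total))], 2))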
it is interesting to mention that although the frequency ( @xmath180 ) of the hyperbolic oscillator in the @xmath181 axis is somewhat less than the present expansion rate of the universe , the @xmath103 is equal to the value predicted by the @xmath0 cosmology ( see section 4.1 ) .
[ figure 4 , left panel : the equation of state parameter as a function of the scale factor for the different dark energy models ; the insert shows the kinetic and potential energies , where @xmath182 is the kinetic energy of the scalar and @xmath3 is the potential . right panel : the functional form of the equation of state parameter for various udm models . the upper line corresponds to @xmath183 while the bottom line corresponds to @xmath184 ; the solid thick line corresponds to the best - fit parameters @xmath185 . we find that initially all the udm models [ @xmath186 , @xmath187 ] start from @xmath188 and reach @xmath189 close to the present time . ]
knowing now the parameter space ( @xmath91 ) we investigate , in more detail , the correspondence of the udm model with the different dark energy models ( see sections 4.1 , 4.2 and 4.3 ) in order to show the extent to which they agree .
our analysis provides an evolution of the udm scale factor seen in the upper panel of figure 2 as the solid line , which closely resembles , especially at late times ( @xmath190 ) , the corresponding scale factor of the @xmath0 ( short dashed ) , vcg ( dot dashed ) , and cpl ( long dashed ) .
note that the udm deceleration parameter at the present time is @xmath191 .
however , for @xmath192 , the cpl and the vcg scale factors evolve more rapidly than the other two models ( udm and @xmath0 cosmology ) .
also it is clear that an inflection point [ @xmath86 ] is present in the evolution of the udm scale factor .
the udm inflection point is located at @xmath194 which corresponds to @xmath195 and is somewhat different from the value predicted by the usual @xmath0 cosmology ( see section 4.1 ) . before the inflection point
, the udm model appears to be more decelerated than the other three dark energy cosmological models , due to the fact that the second term [ @xmath196 ] in eq.([a11 ] ) plays an important role . from figure 2 it becomes clear that the udm model reaches a maximum deviation from the other three dark energy models prior to @xmath197 ( @xmath198 ) . in order to investigate whether the expansion of the observed universe follows such a possibility we need a visible distance indicator ( better observations ) at redshifts @xmath199 .
the evolution of the scalar field is presented in the bottom panel of figure 2 , while in the insert figure we plot the scalar field dependence of the potential energy normalized to unity at the present time . as we have stated in section 3.1
there is one minimum at @xmath9 that corresponds to @xmath200 .
to conclude , we plot in figure 3 the relative deviations of the distance modulus , @xmath201 , of the dark energy models used here from the traditional @xmath0 cosmology .
notice that the open points represent the following deviation : @xmath202 . within the snia redshift range @xmath203 ( @xmath204 ) the vcg distance modulus is close to the @xmath0 one .
the largest deviations of the distance moduli occur at redshifts around 0.5 - 1 for the udm and 1.1 - 1.5 for the cpl model respectively .
we would like to end this section with a discussion on the dark energy equation of state . as we have stated already in the introduction
, there is a possibility for the equation of state parameter to be a function of time rather than a constant ratio between the pressure and the energy density .
within the framework of the scalar field cosmology the equation of state parameter is derived from the field model and in general it is a complicated function of time , even when the potential is written as a simple function of the scalar field . in our case
we have @xmath205 , or else ( see eqs.[phi11 ] , [ potent11 ] ) w(t ) = \frac{{\dot \psi}^{2}(t)-\omega^{2}_{1}[\kappa-\psi^{2}(t)][1-\psi^{2}(t)]}{{\dot \psi}^{2}(t)+\omega^{2}_{1}[\kappa-\psi^{2}(t)][1-\psi^{2}(t ) ] } . note that @xmath207 models can be described by scalar models with @xmath208 strictly equal to -1 . using our best fit parameters we present , in the left panel of figure 4 ,
the equation of state parameter as a function of the scale factor for the different dark energy models .
the udm model ( solid line ) is the only case that provides positive values for the equation of state parameter at early epochs .
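the sign changes described in the tests below follow directly from the generic scalar - field relation w = ( k - v ) / ( k + v ) , with k the kinetic and v the potential energy ; a short sketch makes the limiting cases explicit :

import numpy as np

def w_phi(dphi, V):
    """scalar-field equation of state w = (K - V)/(K + V), K = dphi^2 / 2."""
    K = 0.5 * dphi ** 2
    return (K - V) / (K + V)

print(w_phi(1.0, 0.0))              # kinetic-dominated: w -> +1
print(w_phi(np.sqrt(2.0), 1.0))     # K = V: w = 0 (pressureless matter)
print(w_phi(0.0, 1.0))              # frozen field: w -> -1 (de sitter)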
we have checked the udm scenario against the cosmic coincidence problem ( why the matter energy density and the dark energy density are of the same order at the present epoch ) by utilizing the basic tests proposed by @xcite .
these are : ( a ) at early enough times the equation of state parameter tends to its maximum value , @xmath188 , which means that the dark energy density initially takes large values .
so , as long as the scalar field rolls down , the potential energy @xmath3 decreases rapidly and the kinetic energy @xmath209 takes a large value , ( b ) as @xmath2 continues to roll down , the dark energy density decreases and the equation of state parameter remains close to unity for a quite long period of time ( @xmath210 ) and ( c ) for @xmath211 the equation of state parameter is a decreasing function of time and it becomes negative at @xmath212 . before that epoch , the potential energy of the scalar field remains less than the kinetic energy ( see the insert plot in the left panel of figure 4 ) and the equation of state parameter ( or the scalar field ) resembles background matter . in a special case where @xmath213 [ or @xmath214 ] the equation of state behaves exactly like that of pressureless matter . for @xmath215
we reach the same expansion as in an open universe , because the dark energy density evolves as @xmath216 and has no effect on @xmath217 .
in fact , we verify that prior to the inflection point @xmath218 , which means that after @xmath219 the accelerating expansion of the universe starts . finally , @xmath220 close to the present epoch @xmath221 and the scalar field is effectively frozen ( the same situation seems to hold also in the limit @xmath222 ) .
this is to be expected because at this period the scalar field varies slowly with time ( see the insert panel of figure 4 ) , so that @xmath223 and the dark energy fluid asymptotically reaches the de sitter regime ( cosmological constant ) .
[ figure 5 , top panel : the growth factor evolution for the udm ( solid line ) , @xmath0 ( short dashed ) , vcg ( dot dashed ) and cpl ( long dashed ) models . bottom panel : the deviation @xmath224 of the growth factor for various dark energy models with respect to the @xmath0 solution . the points represent the comparison : ( a ) udm-@xmath0 ( open triangles ) , ( b ) vcg-@xmath0 ( solid squares ) and ( c ) cpl-@xmath0 ( open circles ) . ]
in order to conclude this discussion , it is interesting to point out that we also investigate the sensitivity of the above results to the matter density parameter . as an example , in the right panel of figure 4 we present the evolution of the equation of state parameter for @xmath183 [ upper line ] and @xmath184 [ bottom line ] .
we confirm that in the range @xmath186 and @xmath225 the general behavior ( described before ) of the functional form of the equation of state parameter is an intermediate case between the above lines for @xmath226 , and thus it depends only weakly on the values of the parameter space ( @xmath227 ) .
therefore , our main cosmological results for the udm scenario persist for all physical values of @xmath107 , which strongly indicates that the udm model overcomes the cosmic coincidence problem .
in this section we attempt to study the dynamics at small scales by generalizing the basic linear and non - linear equations which govern the behavior of the matter perturbations within the framework of a udm flat cosmology .
also we compare our predictions with those found for the dark energy models used in this work ( see sections 4.1 , 4.2 and 4.3 ) .
this can help us to understand better the theoretical expectations of the udm model as well as the variants from the other dark energy models .
the evolution equation of the growth factor for models where the dark energy fluid has a vanishing anisotropic stress and the matter fluid is not coupled to other matter species is given by ( @xcite , @xcite , @xcite ) : @xmath228 where @xmath229 and @xmath230 .
useful expressions of the growth factor can be found for the @xmath0cdm cosmology in @xcite , for the quintessence scenario ( @xmath231 ) in @xcite , @xcite , @xcite , @xcite , for dark energy models with a time varying equation of state in @xcite and for the scalar tensor models in @xcite . in the upper panel of figure 5
we present the growth factor evolution which is derived by solving numerically eq .
( [ deltatime1 ] ) , for the four dark energy models ( including the udm ) .
note that the growth factors are normalized to unity at the present time .
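for a homogeneous dark energy fluid , the growth equation can be integrated directly in the scale factor using the standard form d^{\prime\prime } + ( 3/a + e^{\prime}/e ) d^{\prime } - ( 3 \omega_{m } / 2 a^{5 } e^{2 } ) d = 0 , where primes denote d / da . a minimal python sketch ; the flat @xmath0 e(a ) again stands in for the model hubble functions :

import numpy as np
from scipy.integrate import solve_ivp

om = 0.28                                   # flat lcdm stand-in
E  = lambda a: np.sqrt(om * a ** -3.0 + 1.0 - om)
dE = lambda a: -1.5 * om * a ** -4.0 / E(a)

def rhs(a, y):                              # y = (delta, d delta / da)
    d, dd = y
    return [dd, -(3.0 / a + dE(a) / E(a)) * dd
                + 1.5 * om * d / (a ** 5 * E(a) ** 2)]

a0 = 1e-3                                    # deep in the matter era
sol = solve_ivp(rhs, (a0, 1.0), [a0, 1.0],   # growing mode: delta ~ a
                dense_output=True, rtol=1e-8, atol=1e-10)
D = lambda a: sol.sol(a)[0] / sol.sol(1.0)[0]  # growth factor, D(1) = 1
print(D(0.5), D(1.0))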
the behavior of the udm growth factor ( solid line ) has the expected form , i.e. it is an increasing function of the scale factor .
also we find that the growth factor in the udm model is almost an intermediate case between the vcg ( dot dashed line ) and cpl ( long dashed line ) models respectively . in the bottom panel of figure 5
we show the deviation , @xmath224 , of the growth factors @xmath232 for the current dark energy models with respect to the @xmath0 solution @xmath233 .
assuming now that clusters have formed prior to the epoch of @xmath234 ( @xmath235 ) , in which the most distant cluster has been found @xcite , the udm scenario ( open triangles ) deviates from the @xmath0 solution by @xmath236 , while the cpl ( open circles ) and vcg ( solid squares ) deviate by @xmath237 and @xmath238 respectively .
also at the @xmath0-inflection point ( @xmath136 ) we find the following results : ( i ) udm-@xmath0 @xmath239 , ( ii ) cpl-@xmath0 @xmath240 and ( iii ) vcg-@xmath0 @xmath241 . to conclude this discussion it is obvious that for @xmath242 the udm growth factor tends to the @xmath0 solution ( the same situation holds for the cpl model but with @xmath243 ) .
[ figure 6 : the evolution of @xmath259 for the udm model and the @xmath0 cosmology ( @xmath138 ) , respectively . ]
the so called spherical collapse model , which has a long history in cosmology , is a simple but still fundamental tool for understanding how a small spherical patch [ with radius @xmath244 ] of homogeneous overdensity forms a bound system via gravitational instability @xcite .
from now on , we will call @xmath245 the scale factor of the universe where the overdensity reaches its maximum expansion ( @xmath246 ) and @xmath247 the scale factor in which the sphere virializes , while @xmath248 and @xmath249 the corresponding radii of the spherical overdensity . note that in the spherical region
, @xmath250 is the matter density while @xmath251 will denote the corresponding density of the dark energy . in order to address the issue of the dark energy in gravitationally bound systems ( clusters of galaxies ) we can consider the following assumptions : ( i ) clustered dark energy , considering that the whole system virializes ( matter and dark energy ) , ( ii ) the dark energy remains clustered but now only the matter virializes , and ( iii ) the dark energy remains homogeneous and only the matter virializes ( for more details see @xcite , @xcite , @xcite and @xcite ) .
note , that in this work we are using the third possibility . here
we review only some basic concepts of the problem based on the assumption that the dark energy component under a scale of galaxy clusters can be treated as being homogeneous : @xmath252 , @xmath253 and @xmath254 .
in general the evolution of the spherical perturbations , as the latter decouple from the background expansion , is given by the raychaudhuri equation : @xmath255 ( here 4\pi g \equiv 1/2 ) . now within the cluster region the evolution of the dark energy component is written as ( see @xcite ) @xmath256 , while if we consider a scalar field the above equation becomes @xmath257 , where @xmath258 . figure 6 presents examples of @xmath259 obtained for the udm ( solid line ) and for the concordance @xmath0 model ( dashed line ) .
the time needed for a spherical shell to re - collapse is twice the turn - around time , @xmath260 . on the other hand , utilizing both the virial theorem and energy conservation we reach the following condition : @xmath261^{a = a_{\rm f } } = \left[u_{g}+u_{\phi_{c } } \right]^{a = a_{\rm t } } \label{virial } , where @xmath262 is the potential energy and @xmath263 is the potential energy associated with the dark energy for the spherical overdensity ( see @xcite and @xcite ; in our case @xmath264 ) . using the above formulation
we can obtain a cubic equation that relates the ratio between the virial radius @xmath249 and the turn - around outer radius @xmath248 , the so called collapse factor ( @xmath265 ) . notice that eq.([virial ] ) is valid when the ratio of the system 's dark energy density to its matter density at the time of the turn - around takes relatively small values @xcite .
of course in the case of @xmath266 the above expressions get the usual form for @xmath0 cosmology ( @xcite , @xcite ) while for an einstein - de sitter model ( @xmath267 ) we have @xmath268 . finally solving numerically eq.([virial ] )
[ it can be done also analytically ] we calculate the collapse factor .
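for a homogeneous , cosmological - constant - like dark energy the virial condition reduces to the well known cubic 2 \eta \lambda^{3 } - ( 2 + \eta ) \lambda + 1 = 0 ( see e.g. lahav et al . 1991 ) , with \eta proportional to the dark - energy - to - matter density ratio of the overdensity at turn - around ; \eta \to 0 recovers the einstein - de sitter value \lambda = 1/2 . a short root - finding sketch :

from scipy.optimize import brentq

def collapse_factor(eta):
    """root of 2*eta*x^3 - (2 + eta)*x + 1 = 0 in (0, 0.6).

    eta ~ dark-energy-to-matter density ratio of the overdensity at
    turn-around; eta -> 0 gives the einstein-de sitter value x = 1/2.
    """
    f = lambda x: 2.0 * eta * x ** 3 - (2.0 + eta) * x + 1.0
    return brentq(f, 1e-6, 0.6)

for eta in (0.0, 0.01, 0.05, 0.10):
    print(eta, round(collapse_factor(eta), 4))

increasing \eta lowers the root below the einstein - de sitter value 1/2 , in line with the range quoted above .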
in particular , figure 7 shows the behavior of the collapse factor for the current cosmological models starting from the udm ( solid line ) , @xmath0 ( short dashed ) , vcg ( dot dashed ) , and cpl ( long dashed ) .
we find that the collapse factor lies in the range @xmath269 in agreement with previous studies ( @xcite , @xcite , @xcite , @xcite , @xcite , @xcite , @xcite ) .
prior to the cluster formation epoch ( @xmath234 ) the udm scenario appears to produce more bound systems than the other dark energy models .
indeed , we find the following values : @xmath270 , @xmath271 , @xmath272 and @xmath273 . also it becomes clear that the udm collapse factor decreases slowly with the redshift of virialization @xmath274 , due to its positive equation of state parameter .
this is also reflected in the fact that at early epochs the cosmic expansion of the udm model is much more decelerated than in the other three dark energy models .
the latter result is in agreement with those obtained by @xcite .
they found a similar behavior for the collapse factor by considering several potentials with an exponential phase .
in this work we investigate analytically and numerically the large and small scale dynamics of the scalar field flrw flat cosmologies in the framework of the so called _ unified dark matter _ scenario . in particular using a hamiltonian formulation
we find that the time evolution of the basic cosmological functions is described in terms of hyperbolic functions .
this theoretical approach yields analytical solutions which can accommodate a late time accelerated expansion , equivalent to either the dark energy or the standard @xmath0 models .
furthermore , based on a joint likelihood analysis using the snia data and the baryonic acoustic oscillations , we put tight constraints on the main cosmological parameters of the udm cosmological model . in particular
we find @xmath62 and the scalar field at the present time is @xmath69 or 0.084 ( in planck units ) . also , we compare the udm scenario with various dark energy models namely @xmath0 cosmology , parametric dark energy model and variable chaplygin gas .
we find that the cosmological behavior of the udm scalar field model is in good agreement , especially after the inflection point , with those predicted by the above dark energy models , although there are some differences , especially at early epochs . in particular , we reveal that the udm scalar field cosmology has three important differences from the other three dark energy models considered : * it can pick up positive values of the equation of state parameter at large redshifts ( @xmath275 ) . also , it behaves relatively well with respect to the cosmic coincidence problem .
* at early enough epochs ( @xmath197 or @xmath276 ) the cosmic expansion in the udm model is much more decelerated than in the other three dark energy models . in order to investigate whether the expansion of the observed universe has the above property
, we need a visible distance indicator ( better observations ) at high redshifts ( @xmath277 ) . * close to the cluster formation epoch
, its collapse factor @xmath278 is less than @xmath279 of the corresponding factor of the other three dark energy models .
this feature suggests that the @xmath280 parameter can perhaps be used as a cosmological tool .
we thank prof .
george contopoulos , dr .
manolis plionis and the anonymous referee for their useful comments and suggestions .
g. lukes - gerakopoulos was supported by the greek foundation of state scholarships ( iky ) .
m. makler , s. q. de oliveira , i. waga , phys . lett . b , * 555 * , 1 ( 2003 ) ; m. c. bento , o. bertolami , a. a. sen , phys . lett . b , * 575 * , 172 ( 2003 ) ; a. dev , j. s. alcaniz , d. jain , phys . rev . d , * 67 * , 023515 ( 2003 ) ; y. gong , c. k. duan , mon . not . r. astron . soc . , * 352 * , 847 ( 2004 ) ; z. h. zhu , astron . astrophys . , * 423 * , 421 ( 2004 ) ; l. amendola , i. waga , f. finelli , astro - ph/0509099 ; d. kirkman , d. tytler , n. suzuki , j. m. omeara , d. lubin , astrophys . j. suppl . ( apjs ) , * 149 * , 1 ( 2003 ) | we study the dynamics of the scalar field flrw flat cosmological models within the framework of the _ unified dark matter _ ( udm ) scenario . in this model
we find that the main cosmological functions such as the scale factor of the universe , the scalar field , the hubble flow and the equation of state parameter are defined in terms of hyperbolic functions .
these analytical solutions can accommodate an accelerated expansion , equivalent to either the _ dark energy _ or the standard @xmath0 models . performing a joint likelihood analysis of the recent supernovae type ia data and the baryonic acoustic oscillations traced by the sloan digital sky survey ( sdss ) galaxies , we place tight constraints on the main cosmological parameters of the udm cosmological scenario .
finally , we compare the udm scenario with various dark energy models namely @xmath0 cosmology , parametric dark energy model and variable chaplygin gas .
we find that the udm scalar field model provides large and small scale dynamics which are in fair agreement with the predictions of the above dark energy models , although there are some differences , especially at high redshifts .
SAN DIEGO – A trip to the happiest place on Earth was anything but for a Spring Valley family who claim they are victims of racism and are planning to file suit against Disneyland.
The Black family took a trip to Disneyland in August and their son Jason Black Jr. met his favorite character The Rabbit from Alice in Wonderland.
“I went to hug him but he turned his back,” the 6-year-old said. “It’s made me feel sad because I wanted to really hug him.”
“The Rabbit was turning his back on him like he didn’t want to touch him,” said his older brother Elijah Black. “Then I went up and tried to hold his hand but he kept flicking my hand off.”
The family filed a lawsuit against the Anaheim theme park, claiming the person playing the character of The Rabbit discriminated against their children because they are black.
“I asked the rabbit, I said ‘he wants to hug you,’” said the boy’s mother, Annelia Black. “He’s like twirling his fingers, like hurry up take the picture.”
Their father said it was obvious The Rabbit was avoiding them.
“When the rabbit shied away from the kids, our first instinct was, maybe they have new policy. Maybe they aren’t supposed to touch the kids anymore. So we stood by and watched,” said Jason LeRoy Black Sr.
Two white kids approached The Rabbit moments later.
“[The] Rabbit showered, hugged, kissed and posed with them and took pictures. That made my kids feel horrible, it made us as adults feel horrible,” said Jason LeRoy Black Sr.
The family said they immediately went to the management office to complain, showing them photos that they say indicate the rabbit was trying to avoid touching them.
They filed an official complaint and were offered VIP passes, which they declined.
Instead they’re asking Disney to make a public apology and terminate the employee in The Rabbit suit.
“They’re not trying to get something they don’t deserve,” said their attorney Dan Gilleon. “In fact all they’ve asked for is a little bit of recognition that this should not have happened.”
After months of correspondence in which Disney asked them to sign a confidentiality waiver in exchange for $500, the Blacks have hired an attorney and are demanding surveillance video to prove or disprove their claims. ||||| It's a memorable part of parenthood -- when your child gets to meet their favorite character at an amusement park. But a family in San Diego says they were robbed of that special moment because a Disneyland employee was racist.
This family has been battling with Disney since last August because they say the amusement park employee was racist -- and now they just want to make sure no other children go through this.
"I was going to hug him but he turned his back," said 6-year-old Jason Black.
"How'd that make you feel?"
"Sad," replied Jason.
The 6-year-old was shunned by his favorite character from Alice in Wonderland. All he wanted was a hug -- and his older brother Elijah just wanted to hold the rabbit's hand.
"The rabbit was turning his back on him like he didn't event want to touch him, I went up to try to hold his hand but he kept on flicking my hand off," said Elijah.
"Our first instinct was okay, maybe they have new policies, maybe they aren't supposed to touch the kids anymore so then we stood by and we watched," said father Jason Black Sr.
What they saw led them to believe that the person playing Rabbit was racist.
"This white boy. He started hugging on the little girl and kissing... then hugging the boy and they were white," said Elijah Black.
"There were two other kids that came up and the rabbit showered them, hugged kissed them, posed with them, meanwhile that made my kids feel horrible," said Black Sr.
The family immediately showed the photos to management and filed a complaint. Management offered VIP passes -- but the family turn down that offer, asking for an apology and termination of the employee instead.
"It's about the principle and what are you going to do to make the situation better so this doesn't happen to another family."
After months of battling with Disney, the family has been asked to sign a confidential waiver in exchange for $500. They hired an attorney and now demand that the company look at surveillance video of the interaction.
Disney has not yet responded to the family's request. Our affiliate in San Diego contacted Disney for comment and they emailed a statement saying "We cannot comment on something that we are not aware of -- and that we carefully review all guest claims." | – According to a lawsuit, the Black family went to Disneyland and got shunned by the White Rabbit—literally. The San Diego family is suing the amusement park because, they say, the employee playing the Alice in Wonderland character was racist. Last August, 6-year-old Jason Black tried to hug the "rabbit" and his older brother Elijah tried to hold his hand, but the employee ignored the boys, they tell Fox News. "The rabbit was turning his back on him like he didn't event want to touch him," Elijah recalls. "I went up to try to hold his hand but he kept on flicking my hand off." But when two white children approached (the Blacks are black), the character showered them with attention and hugs, the boys' parents say. The family filed a complaint, with photo evidence, but they weren't satisfied with management's offer of VIP passes. They want a public apology and for the employee to be fired, in order to ensure "this doesn't happen to another family," says Jason Black Sr. They hired an attorney after months of battling, when Disney asked them to sign a confidential waiver in exchange for $500, and are now asking Disney to review surveillance footage of the incident. |
SECTION 1. CENTER FOR TECHNICAL ASSISTANCE FOR NON-DEPARTMENT HEALTH
CARE PROVIDERS WHO FURNISH CARE TO VETERANS IN RURAL
AREAS.
(a) Establishment in Department of Veterans Affairs Authorized.--
(1) In general.--The Secretary of Veterans Affairs may
establish in the Department of Veterans Affairs a center for
technical assistance to assist non-Department health providers
who furnish care to veterans in rural areas.
(2) Designation.--The center authorized by paragraph (1)
may be known as the ``Rural Veterans Health Care Technical
Assistance Center'' (in this section referred to as the
``Center'').
(b) Director.--The head of the Center shall be the Director of the
Rural Veterans Health Care Technical Assistance Center, who shall be
appointed by the Secretary from among individuals who--
(1) are qualified to carry out the duties of the Director;
and
(2) have significant knowledge and experience working for
or with a non-Department health care provider that furnishes
care to veterans in rural areas.
(c) Location.--The Secretary shall select the location of the
Center. In selecting the location of the Center, the Secretary shall
give preference to a location that--
(1) has in place infrastructure appropriate for the
functions of the Center;
(2) is located in a State that has--
(A) a high number of veterans in rural and highly
rural areas (including veterans not enrolled in the
system of annual patient enrollment established under
section 1705 of title 38, United States Code); and
(B) a history of strong collaboration--
(i) between the Veterans Health
Administration and non-Department health
providers who furnish care to veterans; and
(ii) between the Veterans Health
Administration and a State institution of
higher education that maintains links to or
contracts with a State office of rural health
and another rural health program; and
(3) is in proximity to one or more entities carrying out
programs and activities relating to health care for rural
populations (including rural populations of veterans),
including an institution of higher education carrying out such
programs and activities that is willing to enter into a
partnership with the Center to assist and collaborate with the
Center in the discharge of its functions.
(d) Functions.--The functions of the Center shall be as follows:
(1) To develop and disseminate information, educational
materials, training programs, technical assistance and
materials, and other tools to improve access to health care
services for veterans in rural areas and to otherwise improve
the health care provided veterans by non-Department health care
providers.
(2) To improve collaboration on health care matters,
including exchange of health information, for veterans
receiving health care from both Department and non-Department
providers of health care services between the Department and
other health care providers serving rural populations,
including rural health clinics, community health centers
serving rural populations, critical access hospitals serving
rural populations, small rural hospitals, telehealth networks,
and other rural health care providers and systems.
(3) To establish and maintain Internet-based information
(including practical models, best practices, research results,
and other appropriate information) on mechanisms to improve
health care for veterans in rural areas.
(4) To work with existing Government offices and agencies
on health care for rural populations and veterans health care,
including the Office of Rural Health of the Department of
Veterans Affairs and the Office of Rural Health Policy of the
Health Resources and Services Administration of the Department
of Health and Human Services, on programs, activities, and
other mechanisms for improving health care for rural veterans.
(5) To track and monitor fee expenditures of the Department
relating to non-Department health care providers serving rural
populations and to evaluate the Center through the use of an
independent entity experienced and knowledgeable about rural
health care matters, such non-Department providers, and
programs and services of the Department.
(e) Discharge of Functions Through Partnerships.--For purposes of
discharging its functions under subsection (d), the Center may enter
into partnerships with persons and entities (including small business
concerns owned by veterans or veterans with service-connected
disabilities) that have demonstrated expertise in the provision of
educational and technical assistance for veterans in rural areas,
health care providers serving rural populations, and persons and
entities seeking to enter into contracts with the Federal Government in
matters relating to the functions of the Center, including the
provision of educational and technical assistance relating to
telehealth, reimbursement for health care, improvement of quality of
care, and contracting with the Federal Government. | Authorizes the Secretary of Veterans Affairs to establish within the Department of Veterans Affairs (VA) a center for technical assistance to assist non-VA health providers who furnish care to veterans in rural areas. Makes the head of such center the Director of the Rural Veterans Health Care Technical Assistance Center. Requires the Secretary, in selecting the center's location, to give preference to a location that, among other things: (1) has a high number of veterans in rural and highly rural areas, and (2) is near one or more entities carrying out programs and activities relating to health care for rural populations. |
New video shows the moment an Indiana police officer confronted and shot at a man he thought was robbing a business. The incident happened last week in Crawfordsville, when police responded to a 911 call about a robbery at Backstep Brewery. The caller told dispatchers that a man in a ski mask entered the bar with a gun. Sgt. Matt Schroeter arrived at the location and saw a man in a ski mask backing out the door. He appeared to be holding a weapon. The "robber" was actually actor Jim Duff, who was filming a scene for a movie. But Schroeter didn't know that. "Drop the gun! Drop the gun now!" Schroeter yells in body camera footage released after the incident. Duff then turns toward the officer and takes off his mask; Schroeter fires a shot and repeats his orders for Duff to drop his weapon. "We're doing a movie," Duff says after taking off the ski mask and dropping the weapon. The officer then orders him to get on the ground. Duff complies and then yells to someone inside the bar, "You guys better get out here, man." Someone starts to come out the door, but the officer tells them to stay inside. Police said neither the production company nor the bar owners told them that a movie was being shot in the area. It didn't help that the other actors and filming equipment were inside the business, making it difficult to know that a movie was being shot. Duff was placed into custody until police could confirm that he was part of a movie scene. "We could not see the police, so when the actor left the building we had no knowledge any police had even arrived at the scene," Montgomery County Movies owner Philip Demoret told CBS4. No charges were filed in connection with the case. Demoret said he planned to work with Crawfordsville police so that something like this doesn't happen again. | – Note to movie makers: Give the police a heads-up when filming a robbery scene in public. Newly released police bodycam video from the city of Crawfordsville, Ind., shows how close one actor came to learning that lesson in the worst possible way. Officers responded to a robbery call at a local bar when someone spotted a masked gunman entering. Cops arrived just as actor Jim Duff was stepping back out, holding a fake gun, and an officer fired a shot when he didn't immediately drop his weapon as ordered, reports ABC 7. Luckily, the bullet didn't hit him. "We're doing a movie," Duff can be heard trying to explain as officers order him to get on the ground. Police ended up holding him in custody until they confirmed that he was, in fact, part of a movie being filmed. No charges will be filed related to the incident, reports the Journal & Courier. "We ask, for obvious safety purposes, that our department be notified of future instances where toy or prop weapons are going to be used," said a city statement.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Medicare Nursing Facility Pay-for-
Performance Act of 2004''.
SEC. 2. ADDITIONAL MEDICARE PAYMENT FOR FACILITIES THAT REPORT
ADDITIONAL QUALITY DATA.
(a) Voluntary Reporting of Quality Measures and Adjustment in
Payment.--
(1) In general.--Section 1888 of the Social Security Act
(42 U.S.C. 1395yy) is amended by adding at the end the
following new subsection:
``(f) Voluntary Reporting of Quality Measures; Change in Payment
Based on Reported Quality Measures.--
``(1) Establishment of additional quality measures.--
``(A) In general.--Not later than 6 months after
the date of the enactment of this subsection, the
Secretary, through a contract with a qualified
independent party (such as the National Quality Forum)
identified by the Secretary, shall provide for the
identification of--
``(i) at least 10, and not more than 15,
quality measures for the performance of skilled
nursing facilities under this title; and
``(ii) the data to be reported, including
their collection and formatting, on a calendar
quarter basis for each such quality measure to
measure the performance of a skilled nursing
facility.
Such measures may be outcome or process measures. Such
measures shall be in addition to the 14 enhanced
measures published by the Secretary for such facilities
for use as of September 1, 2004.
``(B) Measure of staffing level.--The quality
measures identified under subparagraph (A) shall
include a measure of the level of facility staffing and
the mix of licensed staff at a facility.
``(C) Risk adjustment.--The values obtained for
quality measures identified under subparagraph (A),
including the existing 14 enhanced measures, shall be
appropriately risk adjusted as applied to individual
skilled nursing facilities in order to increase the
likelihood that any differences in such values reflect
differences in the care provided by the skilled nursing
facilities and not differences in the characteristics
of the residents in such facilities. Such risk
adjustment shall take into account resident
characteristics that are related to triggering a value
for a quality measure but are not reflective of
facility care processes. Risk adjustment approaches may
include, as appropriate--
``(i) excluding certain types of residents;
``(ii) stratifying residents into high-risk
and low-risk groups; or
``(iii) statistical adjustment (such as
regression analysis) that takes into
consideration multiple characteristics
(covariates) for each resident simultaneously
and adjusts the nursing facilities' quality
measure values for different resident
characteristics.
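As an illustration of the third approach, a regression-based adjustment can be sketched as follows. The bill prescribes no particular formula, so the linear model and variable names below are hypothetical stand-ins, not a method specified by the Act:

```python
# Illustrative only: the Act names stratification and regression as
# permissible approaches but prescribes no formula; the linear model and
# variable names here are hypothetical.
import numpy as np

def risk_adjusted_scores(raw_scores, covariates):
    """Remove the component of facility scores explained by resident mix.

    raw_scores -- (n_facilities,) observed quality-measure values
    covariates -- (n_facilities, k) resident characteristics (e.g., mean age)
    """
    X = np.column_stack([np.ones(len(raw_scores)), covariates])
    beta, *_ = np.linalg.lstsq(X, raw_scores, rcond=None)
    return raw_scores - X @ beta  # residual attributable to facility care

rng = np.random.default_rng(0)
covariates = rng.normal(size=(50, 2))
raw = 0.4 * covariates[:, 0] + rng.normal(scale=0.1, size=50)
print(risk_adjusted_scores(raw, covariates)[:5])
```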
``(D) Small facilities.--
``(i) In general.--In selecting and
applying quality measures, there shall be taken
into account the circumstances of small skilled
nursing facilities.
``(ii) Definition.--For purposes of clause
(i), the term `small skilled nursing facility'
means a skilled nursing facility which had, in
most recent preceding cost reporting period,
fewer than 1,500 patient days with respect to
which payments were made under this title.
``(E) Annual evaluation.--The Secretary shall
provide for an annual process whereby the use of
particular quality measures are evaluated and, as
appropriate, adjusted in consultation with the National
Quality Forum.
``(F) Posting on website.--The Secretary shall
provide for the posting on its website, and the
publication at least annually, of the quality
performance of skilled nursing facilities as measured
through values reported under this subsection by such
facilities.
``(2) Adjustment in payment based on quality performance.--
``(A) In general.--For each fiscal year beginning
with fiscal year 2006, in the case of a skilled nursing
facility that reports data under paragraph (1) for the
data reporting period with respect to that fiscal year
(as defined in subparagraph (C)), the aggregate amount
of payment under this subsection shall be adjusted as
follows:
``(i) Increase of 2 percent for facilities
in top 10 percent in quality.--In the case of a
facility that, based on such data, has a
composite score for quality that is equal to or
exceeds such score for the baseline period (as
defined in subparagraph (D)) for the top 10
percent of skilled nursing facilities that have
reported such data for such baseline period,
such aggregate payment shall be increased by
such amount as reflects an increase in the
market basket percentage increase applied for
the fiscal year involved under subsection
(e)(4)(E)(ii)(V) by 2 percentage points.
``(ii) Increase of 1 percent for facilities
in next 10 percent in quality.--In the case of
a facility that, based on such data, has a
composite score for quality that does not equal
or exceed such score for the baseline period for
the top 10 percent of skilled nursing facilities
that have reported such data for such baseline
period, but is equal to or exceeds such score for the
baseline period for the top 20 percent of such
skilled nursing facilities, such aggregate
payment shall be increased by such amount as
reflects an increase in the market basket
percentage increase applied for the fiscal year
involved under subsection (e)(4)(E)(ii)(V) by 1
percentage point.
``(iii) Quality threshold covering 80
percent of facilities.--For a baseline period,
the Secretary shall establish a quality
threshold score that covers 80 percent of the
skilled nursing facilities that have reported
such data for such baseline period.
``(iv) Decrease of 1 percent for facilities
below quality threshold.--In the case of a
fiscal year beginning with fiscal year 2007, in
the case of a facility that, based on such
data, has a composite score on quality measures
that is below the quality threshold score
established under clause (iii) for the baseline
period, the aggregate payment for the fiscal
year involved shall be decreased by such amount
as reflects a decrease in the market basket
percentage increase applied under subsection
(e)(4)(E)(ii)(V) by 1 percentage point.
``(v) Year by year determination.--Any
increase or decrease in payments to a skilled
nursing facility under the preceding provisions
of this subparagraph for a fiscal year shall
not affect or apply to payments to such
facility in any subsequent fiscal year.
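The tiered logic of subparagraph (A) can be summarized in a short sketch. The composite scoring method and cutoff values are left to the Secretary, so all inputs below are illustrative stand-ins rather than values taken from the bill:

```python
# Hypothetical sketch of subparagraph (A); composite scores and cutoffs are
# left to the Secretary, so all inputs below are illustrative stand-ins.
def market_basket_adjustment(score, top10_cutoff, top20_cutoff,
                             threshold_80, fiscal_year):
    """Percentage-point change to the market basket update for one year."""
    if score >= top10_cutoff:
        return 2                 # clause (i): top 10 percent
    if score >= top20_cutoff:
        return 1                 # clause (ii): next 10 percent
    if fiscal_year >= 2007 and score < threshold_80:
        return -1                # clause (iv): below the 80-percent threshold
    return 0                     # clause (v): determination is year by year

print(market_basket_adjustment(0.93, 0.90, 0.85, 0.40, fiscal_year=2007))  # 2
```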
``(B) Treatment of small facilities.--In the case
of a skilled nursing facility which because of its
small size is unable to submit data on one or more
quality measures--
``(i) the facility shall not be penalized
under this paragraph due to its non-reporting
of such data; and
``(ii) the composite rank or score shall be
based on the data so reported, with appropriate
adjustments so as to be comparable to other
facilities.
``(C) Data reporting period.--For purposes of
subparagraph (A), the term `data reporting period'
means, with respect to--
``(i) fiscal year 2006, such period of
calendar quarters in fiscal year 2005 as the
Secretary shall specify, which, to the extent
feasible, shall be a period of at least 2
calendar quarters; or
``(ii) a subsequent fiscal year, the period
of 4 consecutive calendar quarters ending on
the June 30 preceding the fiscal year.
``(D) Baseline period.--For purposes of
subparagraph (A), the term `baseline period' means,
with respect to--
``(i) fiscal year 2006, the period of
calendar quarters specified under subparagraph
(C)(i); or
``(ii) a subsequent fiscal year, the period
of 4 calendar quarters ending on June 30,
2006.''.
(2) Limiting market basket increases to facilities that
voluntarily report information.--Subsection (e)(4)(E)(ii) of
such section is amended--
(A) in subclause (III), by striking ``and'' at the
end;
(B) in subclause (IV), by inserting ``before the
first fiscal year in which the reporting of quality
measures is in effect under subsection (f)(1)'' after
``each subsequent fiscal year'' and by striking the
period at the end and inserting ``; and''; and
(C) by adding at the end the following new
subclause:
``(V) for each subsequent fiscal year, the
rate computed for the previous fiscal
year increased, in the case of a
skilled nursing facility that reports
data under subsection (f)(1) for the
fiscal year, by the skilled nursing
facility market basket percentage for
the fiscal year involved.''.
(b) Using Fiscal Year 2005 Payment Rates as a Floor for Subsequent
Updates.--
(1) In general.--Subsection (e)(4)(E)(ii)(IV) and
subsection (e)(4)(E)(ii)(V), as added by subsection (a)(2), of
such section are amended by inserting ``(taking into account,
with respect to a previous fiscal year that was fiscal year
2005, all add-ons to such rate that were applicable in such
fiscal year as well as market basket adjustments made in
subsequent fiscal years)'' after ``the rate computed for the
previous fiscal year''.
(2) Effective date.--The amendment made by paragraph (1)
shall apply to the computation of rates for fiscal years
beginning with fiscal year 2006.
SEC. 3. LONG-TERM CARE FINANCING COMMISSION.
(a) Establishment.--There is hereby established a commission to be
known as the ``Long-Term Care Financing Commission'' (in this section
referred to as the ``Commission'').
(b) Composition.--The Commission shall be composed of 10 members
appointed by the Secretary of Health and Human Services.
(c) Duties.--
(1) Analyses.--The Commission shall conduct analyses of the
financing of long-term care, including the financing of nursing
facilities. Such analyses shall include an analysis of each of
the following:
(A) The adequacy of Medicaid program financing of
the long term care system.
(B) Medicare's cross-subsidization of long-term
care for Medicaid patients.
(C) Total industry margins in long-term care.
(D) Long-term demographic challenges.
(E) The impact of current trends, including
staffing shortages and litigation costs, on long-term
care spending.
(F) Different approaches to refinements in the per
diem RUG payment amounts and related payment
methodologies under section 1888(e) of the Social
Security Act (42 U.S.C. 1395yy(e)).
(2) Report.--The Commission shall submit to Congress an
annual report on its analyses. Each such report shall include
recommendations for such changes in financing of long-term care
as the Commission deems appropriate.
(d) Terms, Compensation, Chairman, Meetings, Staff, and Powers.--
The provisions of subsections (c)(3), (c)(4), (c)(5), (c)(6), (d), and
(e) of section 1805 of the Social Security Act (42 U.S.C. 1395b-6)
(relating to provisions for the Medicare Payment Advisory Commission)
shall apply to the Commission in the same manner as they apply to the
Medicare Payment Advisory Commission. | Medicare Nursing Facility Pay-for-Performance Act of 2004 - Amends title XVIII (Medicare) of the Social Security Act (SSA) to direct the Secretary of Health and Human Services, through a contract with a qualified independent party (such as the National Quality Forum), to provide for identification of: (1) between 10 and 15 quality measures for the performance of skilled nursing facilities under Medicare; and (2) the data to be reported, including their collection and formatting, on a calendar quarter basis for each such quality measure.
Requires the values obtained for quality measures to be appropriately risk-adjusted as applied to individual skilled nursing facilities in order to increase the likelihood that any differences in such values reflect differences in the care provided by the facilities and not differences in the characteristics of their residents.
Provides for: (1) adjusting payments for skilled nursing facilities based on quality performance, including an increase of two percent for facilities in the top ten percent in quality as well as a decrease of one percent for facilities below the quality threshold; (2) limiting market basket increases to facilities that voluntarily report information; and (3) using FY 2005 payment rates as a floor for subsequent updates.
Establishes the Long-Term Care Financing Commission to analyze and report to Congress on the financing of long-term care. |
SECTION 1. SUMMER CAMPS.
(a) Program.--
(1) In general.--The Director shall carry out a program to
award grants to institutions of higher education or eligible
nonprofit organizations (or consortia of such institutions and
organizations) to develop and operate summer science and
mathematics camps for middle school and high school students.
The camps shall be designed to promote the interest of children
in science, mathematics, and technology and to increase their
knowledge of these subjects.
(2) Distribution of awards.--The Director shall, in
awarding grants under this section, consider the distribution
of awards among institutions and organizations of different
sizes and geographic locations.
(3) Merit review.--Grants shall be provided under this
section on a competitive, merit-reviewed basis.
(4) Use of grants.--Grant funds may be used to--
(A) develop educational programs and materials for
use in the summer science and mathematics camps; and
(B) cover the costs of operating the summer camps,
including the cost of attendance for students selected
to attend the camps in accordance with subsection
(b)(3).
(b) Selection Process.--
(1) Application.--An applicant for an award under this
section shall submit an application to the Director at such
time, in such manner, and containing such information as the
Director may require. The application shall include, at a
minimum--
(A) a description of the educational program that
will be offered by the science or mathematics summer
camp that the applicant intends to operate;
(B) a description of the process by which students
will be selected to attend the summer camp;
(C) the duration of the camp and the number of
students who can be accommodated in the program each
year;
(D) identification of the individuals who will be
involved in designing and implementing the educational
program at the summer camp; and
(E) evidence of the agreement required under
paragraph (3).
(2) Review of applications.--In evaluating the applications
submitted under paragraph (1), the Director shall consider, at
a minimum--
(A) the ability of the applicant to effectively
carry out the program;
(B) the novelty and educational value of the
program to be offered at the summer camp;
(C) the number of students that will be served
by the program; and
(D) the extent to which the program is tailored to
the needs of individuals from groups underrepresented
in science and technology careers.
(3) Eligibility requirement.--To be eligible to receive a
grant under this section, an institution of higher education or
eligible nonprofit organization (or consortia of such
institutions and organizations) must enter into an agreement
with one or more urban high-need local educational agencies to
develop a process for selecting students from schools
administered by the educational agencies to attend the science
or mathematics summer camp.
(c) Definitions.--In this section--
(1) The term ``Director'' means the Director of the
National Science Foundation.
(2) The term ``eligible nonprofit organization'' means a
nonprofit organization, such as a museum or science center,
that has expertise and experience in providing informal science
and mathematics education for the public.
(3) The term ``urban high-need local educational agency''
means a local educational agency that--
(A) is located in one of the 25 United States
cities with the greatest numbers of children aged 5 to
17 living in poverty, based on data from the Census
Bureau; and
(B) has at least 1 school in which 50 percent or
more of the enrolled students are eligible for
participation in the free and reduced price lunch
program established by the Richard B. Russell National
School Lunch Act (42 U.S.C. 1751 et seq.).
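The two-pronged definition in paragraph (3) amounts to a simple eligibility test. In the sketch below the city set and school data are hypothetical placeholders, not data drawn from the Act or the Census Bureau:

```python
# Hypothetical predicate for paragraph (3); the city set and school shares
# are placeholders, not data from the Act or the Census Bureau.
TOP_25_POVERTY_CITIES = {"City A", "City B"}

def is_urban_high_need(city, school_frl_shares):
    """school_frl_shares: per-school fraction of students eligible for the
    free and reduced price lunch program."""
    return (city in TOP_25_POVERTY_CITIES
            and any(share >= 0.50 for share in school_frl_shares))

print(is_urban_high_need("City A", [0.35, 0.62]))  # True
```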
(d) Authorization of Appropriations.--There are authorized to be
appropriated to the National Science Foundation for the purposes of
this section, $2,000,000 for fiscal year 2007, $2,050,000 for fiscal
year 2008, $2,100,000 for fiscal year 2009, $2,150,000 for fiscal year
2010, and $2,200,000 for fiscal year 2011. | Requires the Director of the National Science Foundation to award grants to institutions of higher education or eligible nonprofit organizations to develop and operate summer camps designed to interest and instruct middle and high school students in science, mathematics, and technology. Conditions a nonprofit organization's eligibility on its expertise and experience in providing the public with informal science and mathematics education.
Requires grantees to enter into agreements with urban high-need local educational agencies on processes for selecting disadvantaged students from schools administered by such agencies for attendance at such camps. Allows the use of grant funds to cover the cost of attendance by such students. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Little Shell Tribe of Chippewa
Indians Restoration Act of 2007''.
SEC. 2. FINDINGS.
Congress finds that--
(1) the Little Shell Tribe of Chippewa Indians is a
political successor to signatories of the Pembina Treaty of
1863, under which a large area of land in the State of North
Dakota was ceded to the United States;
(2) the Turtle Mountain Band of Chippewa of North Dakota
and the Chippewa-Cree Tribe of the Rocky Boy's Reservation of
Montana, which also are political successors to the signatories
of the Pembina Treaty of 1863, have been recognized by the
Federal Government as distinct Indian tribes;
(3) the members of the Little Shell Tribe continue to live
in the State of Montana, as their ancestors have for more than
100 years since ceding land in the State of North Dakota as
described in paragraph (1);
(4) in the 1930s and 1940s, the Tribe repeatedly petitioned
the Federal Government for reorganization under the Act of June
18, 1934 (25 U.S.C. 461 et seq.) (commonly known as the
``Indian Reorganization Act'');
(5) Federal agents who visited the Tribe and Commissioner
of Indian Affairs John Collier attested to the responsibility
of the Federal Government for the Tribe and members of the
Tribe, concluding that members of the Tribe are eligible for,
and should be provided with, trust land, making the Tribe
eligible for reorganization under the Act of June 18, 1934 (25
U.S.C. 461 et seq.) (commonly known as the ``Indian
Reorganization Act'');
(6) due to a lack of Federal appropriations during the
Depression, the Bureau of Indian Affairs lacked adequate
financial resources to purchase land for the Tribe, and the
members of the Tribe were denied the opportunity to reorganize;
(7) in spite of the failure of the Federal Government to
appropriate adequate funding to secure land for the Tribe as
required for reorganization under the Act of June 18, 1934 (25
U.S.C. 461 et seq.) (commonly known as the ``Indian
Reorganization Act''), the Tribe continued to exist as a
separate community, with leaders exhibiting clear political
authority;
(8) the Tribe, together with the Turtle Mountain Band of
Chippewa of North Dakota and the Chippewa-Cree Tribe of the
Rocky Boy's Reservation of Montana, filed 2 law suits under the
Act of August 13, 1946 (60 Stat. 1049) (commonly known as the
``Indian Claims Commission Act''), to petition for additional
compensation for land ceded to the United States under the
Pembina Treaty of 1863 and the McCumber Agreement of 1892;
(9) in 1971 and 1982, pursuant to Acts of Congress, the
tribes received awards for the claims described in paragraph
(8);
(10) in 1978, the Tribe submitted to the Bureau of Indian
Affairs a petition for Federal recognition, which is still
pending as of the date of enactment of this Act; and
(11) the Federal Government, the State of Montana, and the
other federally recognized Indian tribes of the State have had
continuous dealings with the recognized political leaders of
the Tribe since the 1930s.
SEC. 3. DEFINITIONS.
In this Act:
(1) Member.--The term ``member'' means an individual who is
enrolled in the Tribe pursuant to section 7.
(2) Secretary.--The term ``Secretary'' means the Secretary
of the Interior.
(3) Tribe.--The term ``Tribe'' means the Little Shell Tribe
of Chippewa Indians of Montana.
SEC. 4. FEDERAL RECOGNITION.
(a) In General.--Federal recognition is extended to the Tribe.
(b) Effect of Federal Laws.--Except as otherwise provided in this
Act, all Federal laws (including regulations) of general application to
Indians and Indian tribes, including the Act of June 18, 1934 (25
U.S.C. 461 et seq.) (commonly known as the ``Indian Reorganization
Act''), shall apply to the Tribe and members.
SEC. 5. FEDERAL SERVICES AND BENEFITS.
(a) In General.--Beginning on the date of enactment of this Act,
the Tribe and each member shall be eligible for all services and
benefits provided by the United States to Indians and federally
recognized Indian tribes, without regard to--
(1) the existence of a reservation for the Tribe; or
(2) the location of the residence of any member on or near
an Indian reservation.
(b) Service Area.--For purposes of the delivery of services and
benefits to members, the service area of the Tribe shall be considered
to be the area comprised of Blaine, Cascade, Glacier, and Hill Counties
in the State of Montana.
SEC. 6. REAFFIRMATION OF RIGHTS.
(a) In General.--Nothing in this Act diminishes any right or
privilege of the Tribe or any member that existed before the date of
enactment of this Act.
(b) Claims of Tribe.--Except as otherwise provided in this Act,
nothing in this Act alters or affects any legal or equitable claim of
the Tribe to enforce any right or privilege reserved by, or granted to,
the Tribe that was wrongfully denied to, or taken from, the Tribe
before the date of enactment of this Act.
SEC. 7. MEMBERSHIP ROLL.
(a) In General.--As a condition of receiving recognition, services,
and benefits pursuant to this Act, the Tribe shall submit to the
Secretary, by not later than 18 months after the date of enactment of
this Act, a membership roll consisting of the name of each individual
enrolled as a member of the Tribe.
(b) Determination of Membership.--The qualifications for inclusion
on the membership roll of the Tribe shall be determined in accordance
with sections 1 through 3 of article 5 of the constitution of the Tribe
dated September 10, 1977 (including amendments to the constitution).
(c) Maintenance of Roll.--The Tribe shall maintain the membership
roll under this section.
SEC. 8. TRANSFER OF LAND.
(a) Homeland.--The Secretary shall acquire, for the benefit of the
Tribe, trust title to 200 acres of land within the service area of the
Tribe to be used for a tribal land base.
(b) Additional Land.--The Secretary may acquire additional land for
the benefit of the Tribe pursuant to section 5 of the Act of June 18,
1934 (25 U.S.C. 465) (commonly known as the ``Indian Reorganization
Act''). | Little Shell Tribe of Chippewa Indians Restoration Act of 2007 - Extends federal recognition to the Little Shell Tribe of Chippewa Indians of Montana. Makes the Tribe and each member eligible for all services and benefits provided by the United States to Indians and federally recognized Indian tribes, without regard to the existence of a reservation for the Tribe or the location of the residence of any member on or near an Indian reservation.
Directs the Tribe, as a condition of receiving recognition, services, and benefits pursuant to this Act, to submit to the Secretary of the Interior a membership roll consisting of the name of each individual enrolled as a member of the Tribe. Requires the Tribe to maintain such membership roll.
Directs the Secretary to acquire, for the benefit of the Tribe, trust title to 200 acres of land within the Tribe's service area to be used for a tribal land base. |
the bose - einstein condensation of low density atomic samples @xcite provides one with a new paradigm in many - body theory , atomic physics , quantum optics , and nonlinear dynamics . below the critical temperature ,
condensates are described to an excellent degree of accuracy by a scalar nonlinear schrödinger equation , the gross - pitaevskii equation describing the dynamics of the condensate wave function @xcite .
the elementary excitations of the condensate evaluated from a bogoliubov linearization about the condensate solution are in good qualitative agreement with experiments @xcite , and so is the hartree mean - field energy of the system @xcite .
nonlinear schrödinger equations are of course ubiquitous in physics , and have been studied in great detail in the past , in situations from fluid dynamics @xcite to phenomenological models of field theories @xcite and to nonlinear optics @xcite .
they play an important role in the study of pattern formation in beam propagation @xcite , and their soliton solutions find applications in problems such as light propagation in fibers @xcite . from this work
, it is known that the dynamical and stability properties of multicomponent nonlinear schrödinger equations can be vastly different from those of their scalar versions , and lead to a wealth of new effects @xcite .
it would be of considerable interest to generalize these ideas to the case of matter waves and to have multicomponent condensates available . while it is not generally possible to create two coexisting condensates inside a trap , exceptions are possible , as recently demonstrated in rubidium experiments by the jila group @xcite . however , this coexistence relies on a fortuitous coincidence of the scattering lengths for the two zeeman sublevels involved @xcite , a coincidence that can not be generally counted on .
there are proposals to optically change the s - wave scattering length of ground state atoms @xcite , but whether this can be used to produce coexisting condensates remains to be seen .
the goal of the present paper is to propose and analyze a method by which effective multicomponent condensates can be generated inside a high - q multimode optical resonator .
the cavity photons dress the condensate , very much like atoms can be dressed by electromagnetic fields @xcite , and the various dressed condensate states are coupled , e.g. via an electric dipole interaction .
hence , the condensate inside the cavity should be thought of as a coupled multicomponent system , each component subject to a nonlinear equation , and in addition coupled to its neighboring components .
in contrast to the situation involving two ( or more ) atomic species or levels , the coupling between the various components of the dressed condensate is linear , rather than resulting from collisions and hence nonlinear .
nonetheless , we submit that this method permits one to generate and study `` coupled condensates '' in a controllable and at least in principle simple way .
this paper is organized as follows : section ii defines our model and uses a hartree variational principle to derive coupled nonlinear schrödinger equations describing the evolution of a dressed condensate in a two - mode cavity .
section iii specializes to the case where only one photon is present inside the resonator , and thus only two dressed condensate components are of importance . in that case , the problem reduces to the so - called discrete self - trapping equations for a dimer familiar in nonlinear physics .
these equations are integrable in free space , but not in the trap situation that we consider here .
we solve them approximately in the thomas - fermi approximation and study the spectrum of elementary excitations .
we predict the onset of instabilities in the system , and compare these analytical results with an exact numerical solution of the equations .
finally , section iv is a summary and conclusion .
our model system comprises a bose - einstein condensate ( bec ) which interacts with two counter - propagating modes supported by a high - q ring cavity .
we assume a cavity qed configuration @xcite and neglect all field modes except the two of interest , so that the electric field operator can be written as @xmath0 e^{-i\omega_c t } + h.c . , \ ] ] where @xmath1 is the unit polarization vector of the light , @xmath2 is the electric field per photon for light of frequency @xmath3 in a mode volume @xmath4 , @xmath5 the light wavevector , and @xmath6 , and @xmath7 are annihilation and creation operators of the cavity modes satisfying bose commutation relations @xmath8=\delta_{ij}$ ] . for this treatment
we neglect the detailed mode structure of the field in the transverse plane perpendicular to the optical - axis @xmath9 , assuming that it is homogeneous on the spatial scale of the bec , and that it is unaffected by its presence .
the atoms comprising the condensate are confined by an external trapping potential @xmath10 which binds the atoms on a sub - wavelength scale along the longitudinal axis @xmath9 .
in addition they interact with the cavity field , which induces transitions between the ground and excited electronic states .
the single - particle hamiltonian @xmath11 for the atoms , in an interaction picture with the optical frequency removed , then reads @xcite @xmath12 , \label{hamsin}\ ] ] where we have located the bec at @xmath13 without loss of generality , @xmath14 is the center - of - mass atomic momentum , @xmath15 the atomic mass , @xmath16 is atom - field detuning , @xmath17 being the atomic transition frequency , @xmath18 is the strength of the atom - field coupling , @xmath19 being the atomic dipole - matrix element , and @xmath20 , @xmath21 are pseudo - spin atomic raising and lowering operators for transitions between the ground and excited atomic states .
we consider the case of large atom - field detuning for which the excited atomic state can be adiabatically eliminated .
this results in the following effective single - particle hamiltonian ( see appendix a for details ) involving only the ground atomic state @xmath22 the four terms involving field mode operators in this effective hamiltonian describe virtual transitions involving the absorption of a photon from mode 1 followed by re - emission into mode 1 , the same but for mode 2 , absorption of a photon from mode 1 followed by re - emission into mode 2 , and vice versa .
the last two of these processes are allowed since the length @xmath23 of the bec is taken to be less than an optical wavelength , yielding an associated momentum uncertainty @xmath24 .
hence the momentum deficit involved in the transfer of a photon from one direction to the other around the cavity is within the heisenberg uncertainty principle .
we also note that the effective hamiltonian conserves the total number of photons @xmath25 which is then a good quantum number .
proceeding now to include many - body interactions , the second - quantized hamiltonian describing our system is @xcite @xmath26 where @xmath27 denotes a full set of quantum numbers , and @xmath28 and @xmath29 are the usual atomic field annihilation and creation operators , which for bosonic atoms satisfy the commutation relations @xmath30 = \delta(\ell-\ell ' ) .
\label{psicom}\ ] ] the two - body potential is in the limit of s - wave scattering @xcite @xmath31 where @xmath32 measures the strength of the two - body interaction , @xmath33 being the s - wave scattering length . here
we consider a repulsive interaction so that @xmath34 .
the second - quantized hamiltonian for our system conserves both the number of atoms and the total number of photons , so we consider a state comprising @xmath35 atoms and @xmath36 photons .
the state of the system can be written in the form @xmath37 where the summation runs over all positive integers @xmath38 obeying @xmath39 , @xmath40 is the many - particle schrödinger wave function for the bec given there are @xmath41 photons in mode 1 and @xmath42 photons in mode 2 , and @xmath43 is the state with no ground state atoms present , @xmath41 photons in mode 1 and @xmath42 photons in mode 2 . to proceed we invoke the hartree approximation , which is appropriate for a bose condensed system in which the atoms are predominantly in the same state .
the hartree approximation is therefore strictly valid at zero temperature , and for a weakly interacting bose gas as assumed here , so that the condensate fraction is close to unity @xcite . accordingly
the many - particle wave function is written as a product of hartree wave functions @xmath44 here @xmath45 is the hartree wave function which represents the state the atoms occupy .
the equation of motion for @xmath46 results from the hartree variational principle @xcite \begin{aligned } i\hbar\frac{\partial \phi_{n_1,n_2}({\bf r},t)}{\partial t } & = & \left [ \frac{{\bf p}^2}{2 m } + v({\bf r } ) + \frac{\hbar{\omega_0}^2}{\delta}\,(n_1+n_2 ) \right ] \phi_{n_1,n_2}({\bf r},t ) + \hbar v_0 n|\phi_{n_1,n_2}({\bf r},t)|^2 \phi_{n_1,n_2}({\bf r},t ) \nonumber\\ & + & \frac { \hbar{\omega_0}^2}{\delta } \left ( \sqrt { n_1(n_2 + 1 ) } \phi_{n_1 - 1,n_2 + 1}({\bf r},t)+ \sqrt{(n_1 + 1)n_2 } \phi_{n_1 + 1,n_2 - 1}({\bf r},t)\right ) , \label{sys0}\end{aligned}\ ] ] where the photon numbers @xmath38 in modes 1 and 2 again run over all positive integers obeying @xmath39 . in the limit @xmath48
there is no coupling between the cavity modes and the bec and eq .
( [ sys0 ] ) is the usual scalar gross - pitaevskii equation for the condensate .
in contrast , for non - zero values of @xmath49 the processes involving absorption of a photon from one direction and re - emission into the other direction lead to a linear coupling between the state with @xmath50 photons and those with @xmath51 and @xmath52 photons , the notation @xmath50 meaning @xmath41 photons in state 1 and @xmath42 photons in state 2 . as a result ,
the system is generally a superposition of states with different @xmath50 . as they stand , eqs . ( [ sys0 ] )
account for the full three - dimensional structure of the bec . in order to make our presentation as straightforward as possible we now make some further simplifying assumptions , but we stress that these are not essential and do not limit the generality of the conclusions that we draw .
to proceed we write the trapping potential explicitly as @xcite @xmath53 thereby separating the longitudinal potential out from the transverse trapping potential . here
@xmath54 being the transverse position coordinate , @xmath55 the transverse angular frequency of the trap , and @xmath56 is the ratio of the longitudinal to transverse frequencies .
here we assume that @xmath57 so that the longitudinal trapping is much weaker than the transverse trapping , hence giving the bec density profile a cigar structure @xcite .
[ the opposite limit corresponds to the bec having a pancake structure @xcite and alters only the dimensionality of the resulting equations .
in particular it leads to a two - dimensional rather than a one - dimensional problem . ]
specifically , we assume that the transverse structure of the bec is not significantly altered by many - body interactions and is determined as the ground state solution of the transverse potential , \left [ \frac{{\bf p}_\perp^2}{2 m } + \frac{1}{2 } m \omega_\perp^2 r_\perp^2 \right ] v_g({\bf r}_\perp ) = \epsilon_\perp v_g({\bf r}_\perp ) , \ ] ] and we express the hartree wave function as @xmath59 substituting this expression into eq .
( [ sys0 ] ) , projecting out the transverse mode , and dropping the prime for simplicity in notation , yields the coupled gross - pitaevskii equations for the quasi - one - dimensional system . introducing the dimensionless length @xmath60 , @xmath61 being the characteristic length associated with the longitudinal trapping potential , and the dimensionless time @xmath62 ,
these coupled equations can be written in the scaled form @xmath63 where the hamiltonian @xmath64 is given by @xmath65 , \ ] ] @xmath66 is the atom - field interaction energy per photon in units of @xmath67 , which acts as the coupling coefficient between different states @xmath50 , and @xmath68 is the many - body interaction energy for @xmath35 atoms in a volume @xmath69 in units of @xmath67
. equations ( [ sys1 ] ) are the basis of the remainder of this paper .
the transformation to dimensionless variables reveals that the key parameters for the system are the linear coupling coefficient @xmath70 , and the nonlinear parameter @xmath71 describing self - phase modulation .
the dressed becs are the eigenstates of eqs .
( [ sys1 ] ) ( or more generally eqs .
( [ sys0 ] ) ) , and are quantum superpositions of states with different photon numbers @xmath50 . setting @xmath72 for the dressed states , we obtain @xmath73 with @xmath74 the chemical potential scaled to @xmath67 .
admissible solutions should also be normalized according to @xmath75 numerical calculations are generally required to solve these equations for the dressed states , but simple limiting cases can be treated analytically . the essential physics of dressed becs can be exposed using the simplest case of one cavity photon , @xmath76 , so only the states with @xmath77 and @xmath78 are relevant
. then the coupled time - dependent eqs .
( [ sys1 ] ) reduce to @xmath79 which yields the following pair of coupled equations for the dressed states @xmath80 these systems of equations are similar to those that appear in the theory of multi - component condensates @xcite
. however , instead of a nonlinear coupling due to cross - phase modulation we have here linear coupling due to the exchange of photons between cavity modes via virtual atomic transitions . in this respect our equations more closely resemble those describing the _ linear _ evanescent coupling of adjacent nonlinear optical fibers @xcite
. equations ( [ sys3 ] ) are known in nonlinear physics as the discrete self - trapping equations for a dimer ( see e.g. @xcite and references therein ) .
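to make the dimer limit concrete , the following minimal sketch integrates the spatially homogeneous form of eqs . ( [ sys3 ] ) , i.e. the standard discrete self - trapping dimer with linear coupling @xmath70 and self - phase modulation @xmath71 ; the trap and kinetic terms are dropped , and the parameter values and initial state are illustrative rather than those of the paper :

```python
# minimal sketch: spatially homogeneous dimer limit of eqs. (sys3), with the
# trap and kinetic terms dropped; g, eta and the initial state are
# illustrative values, not those of the paper.
import numpy as np

def dimer_rhs(c, g, eta):
    a, b = c
    return np.array([-1j * (eta * abs(a) ** 2 * a + g * b),
                     -1j * (eta * abs(b) ** 2 * b + g * a)])

def evolve(c0, g, eta, dt=1e-3, steps=20000):
    c = np.array(c0, dtype=complex)
    for _ in range(steps):  # fourth-order runge-kutta
        k1 = dimer_rhs(c, g, eta)
        k2 = dimer_rhs(c + 0.5 * dt * k1, g, eta)
        k3 = dimer_rhs(c + 0.5 * dt * k2, g, eta)
        k4 = dimer_rhs(c + dt * k3, g, eta)
        c = c + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return c

# slight population imbalance seeds any instability of the in-phase state
print(np.abs(evolve([np.sqrt(0.51), np.sqrt(0.49)], g=0.1, eta=1.0)) ** 2)
```

a small initial population imbalance is used here to seed any instability , mirroring the role of the perturbed initial condition ( [ incon ] ) introduced later .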
the stationary solutions for such a system were classified and their stability was studied in ref .
@xcite in the case @xmath81 .
three types of solution for the dimer were uncovered : an in - phase solution denoted @xmath82 for which , in our notation , @xmath83 , an out - of - phase solution @xmath84 with @xmath85 , and asymmetric solutions @xmath86 and @xmath87 with @xmath88 .
the same classification scheme can be employed here for the dressed states including the hamiltonian @xmath64 in eqs .
( [ sys4 ] ) . in this case , however , exact analytic solutions are not available for the dressed states , but approximate solutions can be obtained for the in - phase and out - of - phase solutions within the thomas - fermi approximation in which the nonlinear interaction term dominates over the kinetic energy term @xcite . in the thomas - fermi approximation
we then obtain the following dressed state solution from eqs .
( [ sys4 ] ) @xmath89 when the argument of the square root is greater than or equal to zero , and is zero otherwise .
the top sign in eq .
( [ thetadr ] ) corresponds to the in - phase solution , and the lower one to the out - of - phase solution .
the normalization of the wave function then leads to the following expression for the chemical potential of the two solutions @xmath90 where @xmath91^{1/3}$ ] is the longitudinal coordinate at which the thomas - fermi solution vanishes . using this expression for the chemical potential in eq .
( [ thetadr ] ) for the dressed state solution we readily find that the profile @xmath92 is in fact independent of @xmath70 and whether it is the in - phase or out - of - phase solution .
the elementary excitations of the system can be found by linearizing eqs .
( [ sys3 ] ) around the dressed state solutions : \begin{aligned } \phi_{10}(\xi,\tau ) & = & e^{-i\mu\tau}[\theta_{10}(\xi ) + u_{10}(\xi)e^{-i\omega \tau}+v_{10}^\star(\xi ) e^{i\omega \tau } ] , \nonumber \\
\phi_{01}(\xi,\tau ) & = & e^{-i\mu\tau}[\theta_{01}(\xi ) + u_{01}(\xi)e^{-i\omega \tau}+v_{01}^\star(\xi ) e^{i\omega \tau } ] , \label{uv}\end{aligned}\ ] ] where @xmath94 and @xmath95 represent small perturbations around the dressed state with energies @xmath96 . in zeroth
order , substitution of these expressions into eq .
( [ sys3 ] ) results in the system of two equations ( [ sys4 ] ) for the dressed states . in first - order
, it leads to the system of four equations for the linearized perturbations @xmath94 and @xmath95
\begin{aligned } \omega u_{10}(\xi)&=&\left [ h_l + 2\eta |\theta_{10}(\xi)|^2-\mu\right ] u_{10}(\xi)+g u_{01}(\xi)+\eta \theta_{10}(\xi)^2 v_{10}(\xi ) , \nonumber\\ \omega u_{01}(\xi)&=&\left [ h_l + 2\eta |\theta_{10}(\xi)|^2-\mu\right ] u_{01}(\xi)+g u_{10}(\xi)+\eta \theta_{01}(\xi)^2 v_{01}(\xi ) , \nonumber\\ \omega v_{10}(\xi)&=&\left [ h_l + 2\eta |\theta_{10}(\xi)|^2-\mu\right ] v_{10}(\xi)+g v_{01}(\xi)+\eta \theta_{10}(\xi)^2u_{10}(\xi ) , \nonumber\\ \omega v_{01}(\xi)&=&\left [ h_l + 2\eta |\theta_{10}(\xi)|^2-\mu\right ] v_{01}(\xi)+g v_{10}(\xi)+\eta \theta_{01}(\xi)^2 u_{01}(\xi ) .
\label{sysbog}\end{aligned}\ ] ] the normal modes of this system of coupled equations are identical to the elementary excitations determined via the bogoliubov method in which the hamiltonian for the linearized perturbations is brought into diagonal form using a bogoliubov transformation @xcite .
it was shown in ref .
@xcite that for the case corresponding to @xmath81 the out - of - phase solution @xmath84 of the self - trapped equations is always stable , while the in - phase solution @xmath82 is stable until it bifurcates at a condition corresponding to @xmath98 , yielding a stable asymmetric branch and an unstable in - phase branch .
here we study the influence of the hamiltonian @xmath64 on the stability of the in - phase and out - of - phase dressed - states of the system .
an exact solution for the normal modes of eqs .
( [ sysbog ] ) is not available , to the best of our knowledge . to proceed we therefore use the consequence of the thomas - fermi approximation @xmath99 , which means that the profile of the dressed state solution is broad compared to the characteristic length scale of the trapping potential .
this allows us to assume @xmath100 for normal modes localized close to the center of the trapping potential . with this replacement eqs .
( [ sysbog ] ) can be conveniently solved by expanding the perturbations in terms of eigenfunctions of the linear trapping potential @xmath101 @xmath102 substitution of these expressions into eqs .
( [ sysbog ] ) gives a system of linear equations for the coefficients @xmath103 , @xmath104 which is straightforward to solve .
for the out - of - phase dressed - state the spectrum of the elementary excitations is @xmath105 both branches of normal modes given by eq .
( [ stable ] ) are stable , as they are characterized by real values of @xmath106 . in contrast
, the spectrum of the normal modes for the in - phase dressed - state is @xmath107 there are again two branches in the excitation spectrum , one of which , @xmath108 can become unstable , that is , @xmath106 can assume imaginary values .
in particular , the region of instability is defined in terms of the index @xmath109 of the linear oscillator mode as @xmath110 a detailed analysis of the eigenmodes corresponding to the elementary excitations reveals that for the in - phase dressed state the unstable excitations have normal modes that are @xmath111 out of phase , that is , @xmath112 and @xmath113 , whereas the system is stable against symmetric perturbations , @xmath114 and @xmath115 .
this means that as the instability develops , the density profiles of the @xmath77 and @xmath78 components should display modulations which are @xmath111 out of phase . from this analysis
we can also estimate the index @xmath116 for the mode of largest growth rate , that is , the most negative value of @xmath117 the requirement @xmath118 then leads to the condition @xmath119 where the nearest integer value should be taken for @xmath116 .
the corresponding growth rate is readily found to be @xmath120 . in this section
we present sample numerical simulations of the development of the predicted instability for the in - phase dressed state bec .
the aim of these simulations is to validate the approximate stability analysis of the previous section , and to put its predictions in context using a concrete example .
we have solved the coupled gross - pitaevskii equations ( [ sys3 ] ) using a standard beam propagation technique for the initial conditions @xmath121 where @xmath122 is the in - phase ground state . in the initial condition ( [ incon ] ) the parameter @xmath123 is included to provide a slight deviation from the exact in - phase solution .
this deviation from the exact ground state of the trapping potential can be viewed as a wave packet of the normal modes , which triggers any instability present in the system .
we numerically generated the in - phase ground state solution by evolving the thomas - fermi ground state ( [ thetadr ] ) , which represents a symmetric perturbation of the system and is hence stable , until the density profile reached a steady - state .
this evolution of the initial thomas - fermi solution towards the actual ground state occurs since in our numerical scheme an absorber is placed at the spatial grid boundaries to avoid unphysical reflections of high spatial frequencies : the absorber removes the high spatial frequencies present in the thomas - fermi solution leaving behind the actual ground state solution for which the absorber has a negligible effect .
for the numerical simulations presented here we have set @xmath124 , and thus @xmath125 , so the approximations employed here should be valid .
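for orientation , a split - step integrator of the kind used for such simulations is sketched below ; it assumes the scaled harmonic trap and the coupled form of eqs . ( [ sys3 ] ) with linear coupling @xmath70 and nonlinearity @xmath71 , and the grid , absorber profile , and parameter values are illustrative choices rather than those used for the figures :

```python
# illustrative split-step integrator for the coupled equations (sys3),
# assuming a scaled harmonic trap xi**2/2; grid, absorber and parameters
# are not those of the figures.
import numpy as np

n, box = 512, 40.0
xi = np.linspace(-box / 2, box / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
V = 0.5 * xi ** 2
absorber = np.exp(-np.maximum(np.abs(xi) - 0.4 * box, 0.0) ** 2)

def kinetic(phi, dt):
    return np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(phi))

def step(p10, p01, g, eta, dt):
    p10, p01 = kinetic(p10, dt / 2), kinetic(p01, dt / 2)
    p10 = np.exp(-1j * dt * (V + eta * np.abs(p10) ** 2)) * p10
    p01 = np.exp(-1j * dt * (V + eta * np.abs(p01) ** 2)) * p01
    c, s = np.cos(g * dt), -1j * np.sin(g * dt)   # exact coupling rotation
    p10, p01 = c * p10 + s * p01, s * p10 + c * p01
    p10, p01 = kinetic(p10, dt / 2), kinetic(p01, dt / 2)
    return absorber * p10, absorber * p01

eta, g, dt = 100.0, 0.1, 1e-3
p10 = np.exp(-xi ** 2 / 2).astype(complex)
p01 = 0.05 * p10                                   # seed, as in eq. (incon)
norm = np.sqrt(np.trapz(np.abs(p10) ** 2 + np.abs(p01) ** 2, xi))
p10, p01 = p10 / norm, p01 / norm
for _ in range(5000):
    p10, p01 = step(p10, p01, g, eta, dt)
print(np.abs(p10[n // 2]) ** 2, np.abs(p01[n // 2]) ** 2)
```

the linear coupling acts on the pair of components like a pauli - x matrix , so it can be applied exactly as a two - component rotation in each step .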
figure 1 shows the density profile @xmath126 for @xmath127 ( solid line ) and @xmath128 ( dashed line ) , and @xmath129 so there is no coupling . in this case
the density profile remains close to the initial profile with no sign of any density modulations appearing , meaning that the system is stable against density oscillations .
figure 2 shows the evolution of the central densities @xmath130 and @xmath131 for @xmath132 .
initially , the densities show modulations resulting from the beating of the normal modes excited in the initial state , but this is followed by a region of exponential growth for one component and decay for the other .
this is where the excitation of largest growth rate is expected to be dominant , and the predictions of the linear stability analysis can be tested . in particular , for the parameters used here we find @xmath133 from eq .
( [ mumax ] ) .
figure 3 shows the density profiles @xmath134 ( dotted line ) and @xmath135 ( dashed line ) , along with the initial profile ( solid line ) for comparison .
here we have plotted the densities for @xmath136 in the region of exponential growth and density modulations signaling an instability are clearly seen . in particular , the density oscillations of the two components are @xmath111 out of phase as predicted , and the oscillations correspond precisely to those expected for the most unstable mode with @xmath133 . these results clearly show that the linear coupling and associated quantum superposition of the system wave function have a large effect : in contrast to the stable bec shown in fig . 1 for @xmath137 , the introduction of linear coupling via a single cavity photon is sufficient to render the @xmath35-atom condensate spatially unstable .
the density profiles shown in fig .
3 are appropriate if we determine which cavity mode the photon occupies .
for example , @xmath126 is the density given that there is one photon in mode 1 and none in mode 2 .
in contrast , if no determination is made of which mode the photon occupies then the density profile is @xmath138 , and this is shown in fig . 4 where the density oscillations remain but with significantly reduced contrast .
here we see the possibility for a delayed choice experiment with a many - body system : imagine that the system is left to evolve and then released from the trap after which it falls under gravity and its density profile is measured .
if we measure which mode the photon occupies as the bec is dropping we expect the large contrast density oscillations , whereas if we do not measure which mode the photon occupies we expect the low contrast density oscillations .
in addition , the decision whether to measure the cavity photon or not can in principle be made after the bec has left the cavity when the bec and field no longer interact , thus providing a delayed choice experiment .
care should be taken , however , that the time at which the measurement is performed is within the linear growth range illustrated in fig .
in this paper , we have introduced the concept of dressed condensates , which permit one to create a coupled , multicomponent macroscopic quantum system whose dynamics can be vastly different from that of a bare condensate . a number of immediate extensions of the ideas presented here can readily be envisioned .
for example , one can easily imagine ways to create three - component systems , entangled condensates , etc .
such systems will allow one to extend many of the ideas related to measurement theory that have been developed in recent years in quantum optics to truly macroscopic quantum systems .
this work is supported by the u.s .
office of naval research contract no .
14 - 91-j1205 , by the national science foundation grant phy95 - 07639 , by the joint services optics program and by the us army research office .
the information contained in this paper does not necessarily reflect the position or policy of the u.s .
government , and no official endorsement should be inferred .
p. m. greatfully acknowledges an alexander - von - humboldt stiftung award which enabled his stay at the max - planck institut fr quantenoptik , during which part of this work was carried out .
he also thanks prof .
h. walther for his warm hospitality .
in section i we introduced the hamiltonian ( [ hamsin ] ) which describes dynamics of a single - atom .
this hamiltonian conserves number of excitations in the system , and thus its state can be represented as a linear combination of the states with one excitation @xmath139 equations for the coefficients @xmath140 follow from the schrdinger equation for the state @xmath141 and read ( here we omit kinetic energy and trapping potential terms ) @xmath142 assuming @xmath143 the excited atomic state can be adiabatically eliminated @xmath144 and the resulting equations for the coefficients @xmath145 and @xmath146 read @xmath147 at this point it is straightforward to see that these equations follow from the schrdinger equation for the state @xmath148 if the effective hamiltonian for the system is ( [ hamsineff ] ) . | we propose and analyze a way in which effective multicomponent condensates can be created inside high - q multimode cavities .
in contrast to the situation involving several atomic species or levels , the coupling between the various components of the dressed condensates is linear .
we predict analytically and numerically confirm the onset of instabilities in the quasiparticle excitation spectrum .
= 10000 |
quantum theory does not admit bell - local or kochen - specker - noncontextual hidden variable ( ks - nchv ) models . this is manifest in bell - nonlocality @xcite and ks - contextuality @xcite .
both these features arise at a mathematical level from the lack of a global joint probability distribution over measurement outcomes that recovers the measurement statistics predicted by quantum theory .
traditionally , ks - contextuality has been shown for ks - nchv models of projective measurements for hilbert spaces of dimension three or greater @xcite . for projective measurements
, ks - noncontextuality assumes that the outcome of a measurement @xmath0 is independent of whether it is performed together with a measurement @xmath1 , where @xmath2=0 $ ] , or with measurement @xmath3 , where @xmath4=0 $ ] and @xmath1 and @xmath3 are not compatible , i.e. , @xmath5\neq0 $ ] .
@xmath1 and @xmath3 provide contexts for measurement of @xmath0 .
a qubit can not yield a proof of ks - contextuality because it does not admit such a triple of projective measurements . while a state - independent proof of ks - contextuality holds for any state - preparation
, a state - dependent proof requires a special choice of the prepared state .
the minimal state - independent proof of ks - contextuality requires a qutrit and @xmath6 projectors @xcite .
the minimal state - dependent proof @xcite , first given by klyachko et al . , requires a qutrit and five projectors ( fig .
[ kcbs ] ) .
thus a qutrit is the simplest quantum system that allows a proof of ks - contextuality , both state - independent and state - dependent .
however , we note that generalizations of ks - noncontextuality for a qubit have been considered earlier @xcite in a manner that is different from our approach . the precise difference , and the merits of our approach over earlier attempts , are discussed at length in ref .
@xcite . here
we simply note that our approach builds upon the work of spekkens @xcite and liang et al .
@xcite , and we consider generalized noncontextuality proposed by spekkens as the appropriate notion of noncontextuality for unsharp measurements @xcite .
generalized noncontextuality allows outcome - indeterministic response functions for unsharp measurements in the ontological model while ks - nchv models insist on outcome - deterministic response functions .
in particular , generalized noncontextuality insists on the noncontextuality of _ probability _ assignments to measurement outcomes rather than the stronger ks - noncontextual assumption of the noncontextuality of _ value _ assignments . a ks - nchv model is necessarily generalized - noncontextual but the converse is not true .
further , while the assumption of ks - noncontextuality applies to projective measurements in quantum theory , the assumption of generalized noncontextuality applies to all experimental procedures
preparations , transformations , and measurements in any operational theory .
we define a contextuality scenario as a collection of subsets , called ` contexts ' , of the set of all measurements .
a context refers to measurements that can be jointly implemented .
conceptually , the simplest possible contextuality scenario , first considered by specker @xcite ( fig . [ specker ] ) , requires three two - valued measurements , @xmath7 , to allow for three non - trivial contexts : @xmath8 . any other choice of contexts will be trivially ks - noncontextual , e.g. , @xmath9 is ks - noncontextual because the joint probability distribution @xmath10 reproduces the marginal statistics . by implication ,
it is also generalized - noncontextual . on assigning outcomes @xmath11 noncontextually to the three measurements @xmath7
, it becomes obvious that the maximum number of anticorrelated contexts possible in a single assignment is two , e.g. , for the assignment @xmath12 , @xmath13 and @xmath14 are anticorrelated but @xmath15 is not .
this puts a ks - noncontextual upper bound of @xmath16 on the probability of anticorrelation when a context is chosen uniformly at random .
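this counting argument is easily verified by exhaustive enumeration ; the sketch below runs over all eight deterministic assignments and recovers the ks - noncontextual bound @xmath16 :

```python
# exhaustive check of the ks-noncontextual bound for specker's scenario
from itertools import product

best = max(
    sum(a != b for a, b in [(m1, m2), (m2, m3), (m1, m3)])
    for m1, m2, m3 in product([0, 1], repeat=3)
)
print(best)  # 2: at most two of three contexts anticorrelated => bound 2/3
```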
specker s scenario precludes projective measurements because a set of three pairwise commuting projective measurements is trivially jointly measurable and can not show any contextuality .
one may surmise that it represents a kind of contextuality that is not seen in quantum theory .
however , as liang et al . showed @xcite , this contextuality scenario can be realized using noisy spin-1/2 observables . if one does not assume outcome - determinism for unsharp measurements and models them stochastically but noncontextually , then this generalized - noncontextual model for noisy spin-1/2 observables will obey a bound of @xmath17 , where @xmath18 $ ] is the sharpness associated with each observable .
formally , @xmath19 where @xmath20 is the probability of anticorrelation between the outcomes @xmath21 of measurements @xmath22 and @xmath23 , respectively .
@xmath24 denotes the povm corresponding to the joint implementation of @xmath22 and @xmath23 .
we will refer to this generalized noncontextuality inequality as the _ lsw ( liang - spekkens - wiseman ) inequality_. this is _ not _ a ks - noncontextual inequality , for which the bound would be @xmath16 .
a violation of the lsw inequality will rule out generalized noncontextuality and , by implication , ks - noncontextuality .
a discussion of this generalized noncontextual model and its ontological meaning , compared to the usual ks - nchv model , is provided in appendix [ modelcompare ] , where we also point out the merits of generalized noncontextuality over ks - noncontextuality as a benchmark for nonclassicality . for a more detailed analysis of these issues
we refer the interested reader to refs . @xcite . after giving examples of orthogonal and trine spin - axes that did not seem to show a violation of this inequality , liang et al .
left open the question of whether such a violation exists @xcite .
they conjectured that all such triples of povms will admit a generalized noncontextual model @xcite , i.e. , the lsw inequality will not be violated .
our main result is a proof that a state - dependent violation of the lsw inequality is possible . in section [ lswineq ]
we set up the lsw inequality for three unsharp qubit povms , in section [ constraintsoneta ] we obtain constraints on @xmath25 from joint measurability , and section [ constructjoint ] provides the construction of the joint measurement povms . in section [ nosi ]
we prove that noisy spin-1/2 observables do not allow a state - independent violation of the lsw inequality , followed by our main result in section [ sdviolation ] : a state - dependent violation of the lsw inequality for the case of trine spin axes .
we conclude with some discussion and open questions in section [ conclude ] .
the three povms considered , @xmath26 , @xmath27 , are noisy spin-@xmath28 observables of the form @xmath29 that is , @xmath30 where @xmath31 are the corresponding projectors .
so @xmath32 are noisy versions of the projectors @xmath33 , and the observable @xmath34 is therefore a noisy ( or unsharp ) version of the projective measurement @xmath35 ( for @xmath27 ) .
the lsw inequality concerns the following quantity : @xmath36 where @xmath21 label measurement outcomes for @xmath22 and @xmath23 , respectively .
the joint measurement povm for the context @xmath37 is denoted by @xmath38 .
@xmath39 is the joint measurement effect corresponding to the effects @xmath40 and @xmath41 , i.e. , @xmath42 , and @xmath43 .
@xmath44 is the average probability of anticorrelation when one of the three contexts is chosen uniformly at random . under a generalized noncontextual model for these noisy spin-1/2 observables ,
the following bound on @xmath44 holds ( cf .
@xcite , section 7.3 ) : @xmath45 the question is : does there exist a triple of noisy spin-1/2 observables that will violate this inequality , perhaps for some specific state - preparation ?
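the bound itself is hidden behind a placeholder here; for reference, the lsw inequality as published by liang, spekkens and wiseman (and as used in what follows) reads as below, with @xmath25 the common sharpness of the three observables and with the outcome and measurement labels as already introduced above:

```latex
R_3 \;\equiv\; \frac{1}{3}\sum_{(ij)\in\{(12),(23),(13)\}} p\!\left(X_i \neq X_j \mid M_{ij}\right)
\;\le\; 1-\frac{\eta}{3}, \qquad \eta\in[0,1].
```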
testing the lsw inequality for a quantum mechanical violation requires a special kind of joint measurability , denoted by the jointly measurable contexts @xmath46 , i.e. , pairwise joint measurability but no triplewise joint measurability .
this can be achieved by adding noise to projective measurements along three different axes . for a given choice of @xmath47 in eq .
( [ qubitpovms ] ) , denoting @xmath48 , a sufficient condition for this kind of joint measurability is @xmath49 , where @xmath50 and @xmath51 . these are obtained as special cases of the more general joint measurability conditions obtained in appendix [ bounds ] , based on refs . @xcite and @xcite .
we note that this condition is necessary and sufficient for the special case of trine ( @xmath52 ) and orthogonal ( @xmath53 ) spin axes .
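the thresholds can be reproduced numerically. a sketch, assuming the standard criteria: two equally sharp unbiased qubit observables are jointly measurable iff η(|n̂_i + n̂_j| + |n̂_i - n̂_j|) ≤ 2, and for three of them we use the condition of appendix [ bounds ], which the text states is necessary and sufficient for trine and orthogonal axes; the function names are ours.

```python
import numpy as np
from itertools import product

def eta_pairwise_max(n1, n2):
    # two equally sharp unbiased qubit observables are jointly measurable
    # iff eta * (|n1 + n2| + |n1 - n2|) <= 2
    return 2.0 / (np.linalg.norm(n1 + n2) + np.linalg.norm(n1 - n2))

def eta_triplewise_max(ns):
    # condition from appendix [bounds], stated to be necessary and sufficient
    # for the trine and orthogonal configurations:
    # (eta / 4) * sum over x2, x3 = +-1 of |n1 + x2 n2 + x3 n3| <= 1
    total = sum(np.linalg.norm(ns[0] + x2 * ns[1] + x3 * ns[2])
                for x2, x3 in product((1, -1), repeat=2))
    return 4.0 / total

trine = [np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3), 0.0])
         for k in range(3)]
ortho = [np.eye(3)[k] for k in range(3)]

for name, ns in (("trine", trine), ("orthogonal", ortho)):
    pw = min(eta_pairwise_max(ns[i], ns[j]) for i, j in ((0, 1), (1, 2), (0, 2)))
    tw = eta_triplewise_max(ns)
    print(f"{name}: pairwise-but-not-triplewise window: {tw:.4f} < eta <= {pw:.4f}")
# trine: 0.6667 < eta <= 0.7321 ; orthogonal: 0.5774 < eta <= 0.7071
```

within these windows the three povms are pairwise but not triplewise jointly measurable, which is exactly specker's scenario.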
we construct the joint measurement povm , @xmath54 , such that the given povms , @xmath55 and @xmath56 , are recovered as marginals , i.e. , @xmath42 , @xmath43 , @xmath57 , and @xmath58 , where @xmath21 .
the joint measurement povm has the following general form :
\begin{aligned}
g^{ij}_{++}&=&\frac{1}{2}[\frac{\alpha_{ij}}{2}i+\vec{\sigma}.\frac{1}{2}(\eta(\hat{n}_i+\hat{n}_j)-\vec{a}_{ij})]\\
g^{ij}_{+-}&=&\frac{1}{2}[(1-\frac{\alpha_{ij}}{2})i+\vec{\sigma}.\frac{1}{2}(\eta(\hat{n}_i-\hat{n}_j)+\vec{a}_{ij})]\\
g^{ij}_{-+}&=&\frac{1}{2}[(1-\frac{\alpha_{ij}}{2})i+\vec{\sigma}.\frac{1}{2}(\eta(-\hat{n}_i+\hat{n}_j)+\vec{a}_{ij})]\\
g^{ij}_{--}&=&\frac{1}{2}[\frac{\alpha_{ij}}{2}i+\vec{\sigma}.\frac{1}{2}(\eta(-\hat{n}_i-\hat{n}_j)-\vec{a}_{ij})]\label{jointend}
\end{aligned}
where @xmath60 is the @xmath61 identity matrix , @xmath62 are the @xmath61 pauli matrices , @xmath63 , and @xmath64 .
the necessary and sufficient conditions for these to be valid qubit effects , @xmath57 , @xmath65 , are equivalent to the following inequalities @xcite : @xmath66 and @xmath67 , where @xmath68 . the construction of the joint measurement povm and the derivation of the necessary and sufficient conditions for its validity , ( [ valid1])-([valid2 ] ) ,
are provided in appendix [ jointconstruct ] .
the joint measurement effects corresponding to anticorrelation sum to @xmath69
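the construction can be checked mechanically. a sketch, using the general form as reconstructed above; the sign of a_ij in g^{ij}_{++} and g^{ij}_{--} is the one forced by requiring the four effects to sum to identity, and the parameter values in the demo are illustrative only.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sig(v):  # v . sigma for a real 3-vector v
    return sum(v[k] * PAULI[k] for k in range(3))

def joint_povm(ni, nj, eta, a_ij, alpha_ij):
    # general form of the effects g^{ij}, as reconstructed above
    return {
        (+1, +1): 0.5 * ((alpha_ij / 2) * I2 + sig(0.5 * (eta * (ni + nj) - a_ij))),
        (+1, -1): 0.5 * ((1 - alpha_ij / 2) * I2 + sig(0.5 * (eta * (ni - nj) + a_ij))),
        (-1, +1): 0.5 * ((1 - alpha_ij / 2) * I2 + sig(0.5 * (eta * (-ni + nj) + a_ij))),
        (-1, -1): 0.5 * ((alpha_ij / 2) * I2 + sig(0.5 * (eta * (-ni - nj) - a_ij))),
    }

# demo: a trine pair in the xy-plane, a_ij orthogonal to the plane of the axes
eta, a = 0.68, 0.30                        # illustrative values
ni = np.array([1.0, 0.0, 0.0])
nj = np.array([-0.5, np.sqrt(3) / 2, 0.0])
alpha = np.sqrt(eta**2 + a**2)             # minimal admissible alpha_ij here
g = joint_povm(ni, nj, eta, np.array([0.0, 0.0, a]), alpha)

# all four operators are valid effects and sum to identity ...
assert np.allclose(sum(g.values()), I2)
assert all(np.linalg.eigvalsh(op).min() > -1e-12 for op in g.values())
# ... and the marginals recover the unsharp observables along ni and nj
assert np.allclose(g[1, 1] + g[1, -1], 0.5 * (I2 + eta * sig(ni)))
assert np.allclose(g[1, 1] + g[-1, 1], 0.5 * (I2 + eta * sig(nj)))
```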
we will now show that no state - independent violation of the lsw inequality with qubit povms is possible .
there exists no state - independent violation of the generalized - noncontextual inequality @xmath70 using a triple of qubit povms , @xmath71 , that are pairwise jointly measurable but not triplewise jointly measurable . _
proof. _ in quantum theory , the probability @xmath72 for anticorrelation of measurement outcomes for pairwise joint measurements of @xmath73 ( where @xmath74 ) has the following form for a qubit state @xmath75 : @xmath76 . the condition for violation of the noncontextual inequality ( [ ncineq ] ) is @xmath77 . using ( [ anticorr ] )
this reduces to @xmath78 . using the standard @xmath79 pauli matrices and @xmath75 parameterized by @xmath80 and @xmath81 : @xmath82 . the condition for violation becomes @xmath83 , where @xmath84 denotes the state - dependent term in the condition and @xmath85 is given by @xmath86 . for a state - independent violation , either the state - dependent term in ( [ viol ] ) , @xmath87 , must vanish for all qubit states @xmath75 , or @xmath88 should hold .
the first case , @xmath89 , requires @xmath90 , since @xmath91 is the only term in @xmath87 that depends on the joint measurement povm .
this means @xmath92 , so that @xmath93 for all @xmath75 .
the second case requires @xmath94 . in both cases
, we have the following lower bound on @xmath95 from inequality ( [ valid1 ] ) : @xmath96 . taking the sum of @xmath95 , @xmath97 , we have @xmath98 . for the first case , the condition for state - independent violation is @xmath99 , while for the second case the condition for such a violation is @xmath94 . given the lower bound on @xmath100 , it follows that a necessary condition for a state - independent violation of the lsw inequality is @xmath101 . we will show that there exists no choice of measurement directions that satisfies this necessary condition , thereby ruling out a state - independent violation of the lsw inequality .
the particular cases of orthogonal axes ( @xmath53 ) or trine spin axes ( @xmath52 ) , used in @xcite , are clearly ruled out by this necessary condition . denoting @xmath102 ,
the necessary condition for violation is @xmath103 . without loss of generality , the three directions can be parameterized as @xmath104 , where @xmath105 and @xmath106 . this implies @xmath107 , and then @xmath108 , which contradicts the necessary condition ( [ necc ] ) .
hence , there is no state - independent violation of the lsw inequality ( [ ncineq ] ) allowed by noisy spin-1/2 observables .
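the geometric fact doing the work in this proof is that the three pairwise sums of unit vectors cannot all be short: under our reconstruction, the necessary condition amounts to the sum over pairs of |n̂_i + n̂_j| being strictly less than 2, while the parameterization above shows the sum never drops below 2. a quick random search is consistent with this:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_sum(n1, n2, n3):
    return (np.linalg.norm(n1 + n2) + np.linalg.norm(n2 + n3)
            + np.linalg.norm(n1 + n3))

def rand_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

best = min(pair_sum(rand_unit(), rand_unit(), rand_unit()) for _ in range(100000))
print(best)  # never drops below 2 (the infimum, approached for degenerate axes)
```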
our main result is that the lsw inequality can be violated in a state - dependent manner . from the condition for violation ( [ viol ] )
, it follows that a necessary condition for state - dependent violation is @xmath109 .
an optimal choice of @xmath75 that yields @xmath110 corresponds to @xmath111 and @xmath112 , i.e. , @xmath113 . with this choice of @xmath75 , the question becomes : does there exist a choice of @xmath114 such that @xmath109 ?
we show that this is indeed the case .
we define @xmath115 so that @xmath116 indicates a state - dependent violation .
note that violation of the lsw inequality @xmath117 is characterized by @xmath118 , where @xmath119 for a state - dependent violation . given a coplanar choice of @xmath47 and @xmath25 satisfying @xmath120 , the optimal value of @xmath3 , denoted @xmath121 , is given by @xmath122 , which is obtained in appendix [ optimal ] .
we obtain a state - dependent violation of the lsw inequality for trine axes ( fig .
[ plane ] ) : [ thm2 ] the optimal violation of the lsw inequality for measurements along trine spin axes , i.e. , @xmath47 such that @xmath123 , occurs for @xmath124 if the plane of measurements is the zx plane .
the lower and upper bounds on @xmath25 are @xmath125 and @xmath126 .
the joint measurement povm is given by @xmath127 and @xmath128 .
the optimal violation corresponds to @xmath129 , so that @xmath130 , @xmath131 , @xmath132 , and @xmath133 .
[ figure [ plane ] : trine spin axes in the z - x plane . ] thus the quantum probability of anticorrelation can exceed the generalized - noncontextual bound by an amount arbitrarily close to @xmath134 , or about @xmath135 , for trine spin axes .
the quantum degree of anti - correlation for this violation is @xmath136 and the generalized - noncontextual bound is @xmath137 .
the proof of theorem [ thm2 ] follows from appendix [ optimal ] .
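since the concrete numbers sit behind placeholders, here is a numerical sketch of the trine computation as we reconstruct it from the appendix: coplanar trine axes, all a_ij orthogonal to that plane with a common length a, α_ij at its minimum √(η² + a²), a as large as the effect-validity constraint √(3η² + a²) + √(η² + a²) ≤ 2 allows, and the state's bloch vector along the common direction of the a_ij. the printed quantities (quantum R3, the bound 1 - η/3, and their difference) follow from these assumptions, not from the placeholders themselves.

```python
import numpy as np

def trine_violation(eta):
    # largest admissible |a_ij| = a for this eta:
    # sqrt(eta^2 + a^2) = 1 - eta^2 / 2 solves the saturated validity constraint
    a2 = (1 - eta**2 / 2) ** 2 - eta**2
    if a2 < 0:
        return None                       # pairwise joint measurement impossible
    a = np.sqrt(a2)
    alpha = np.sqrt(eta**2 + a2)          # minimal alpha_ij for trine axes
    r3_quantum = 1 - alpha / 2 + a / 2    # state along the common a_ij direction
    return r3_quantum, 1 - eta / 3, r3_quantum - (1 - eta / 3)

for eta in (0.667, 0.68, 0.69, 0.70):
    q, b, v = trine_violation(eta)
    print(f"eta={eta:.3f}  R3={q:.4f}  lsw bound={b:.4f}  violation={v:+.4f}")

# as eta -> 2/3 from above (triplewise incompatibility needs eta > 2/3 for trine),
# the violation approaches (sqrt(13) - 3)/18, about 0.0336, and it disappears
# again slightly below eta = 0.70 under this reconstruction
print((np.sqrt(13) - 3) / 18)
```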
a violation of the lsw inequality is interesting primarily because the benchmark for nonclassicality set by generalized noncontextuality is more stringent than the one set by the traditional notion of ks - noncontextuality .
the lsw inequality takes into account , for example , the possibility that the measurement apparatus could introduce anticorrelations that have nothing to do with hidden variable(s ) one could associate with the system s preparation .
this would allow violation of the ks - noncontextual bound of @xmath16 when the measurement is unsharp ( @xmath138 ) even though this violation could purely be a result of noise coming from elsewhere , such as the measurement apparatus , rather than a consequence of quantum theory .
a violation of the lsw inequality , on the other hand , rules out this possibility and certifies genuine nonclassicality that cannot be attributed to hidden variables associated with either the preparation or the noise . as argued by spekkens @xcite , the appropriate notion of noncontextuality for unsharp measurements is one that allows outcome - indeterministic response functions .
an interesting open question is whether such a violation is possible in higher dimensional systems and whether the amount of violation could be higher for these than for a qubit . whether a state - independent violation of the lsw inequality is possible in higher dimensions
also remains an open question .
our result also hints that perhaps all contextuality scenarios may be realizable , and contextuality demonstrated , if we consider the possibilities that general quantum measurements allow . in particular , scenarios that involve pairwise compatibility between all measurements but no global compatibility may be realizable within quantum theory .
specker's scenario is the simplest such example we have considered . indeed , as later shown in ref .
@xcite after the present work was completed , quantum theory does admit all contextuality scenarios , since it allows one to realize any conceivable set of (in)compatibility relations between a set of observables . in summary ,
the joint measurability allowed in a theory restricts the kind of contextuality scenarios that can arise in it .
quantum theory admits specker's contextuality scenario with unsharp measurements @xcite .
further , as we have shown , quantum theory allows violation of the lsw inequality in this scenario .
thus quantum theory is contextual even in the simplest contextuality scenario .
whether , and to what extent , this is the case with more complicated contextuality scenarios realizable , for example , via the construction in ref .
@xcite remains to be explored .
_ note ._ in ref .
@xcite , which appeared after completion of the present work , the authors deal with the lsw inequality and make some remarks on the results of this paper .
we refer the interested reader to ref .
@xcite for a discussion of claims in ref .
@xcite compared to the results of this paper .
see also appendix [ relaxincomp ] for a brief remark on the question of triplewise incompatibility .
r. k. would like to thank rob spekkens for comments on earlier drafts of this work ; in particular , for asking whether a state - dependent violation of the lsw inequality was possible , and for clarifying the operational meaning of the generalized - noncontextual bound in this inequality .
thanks are also due to andreas winter for asking some tough questions about benchmarks for nonclassicality .
we also thank the anonymous referees for their comments which made us re - examine and revise some of our results .
a. einstein , b. podolsky , and n. rosen , phys . rev . * 47 * , 777 ( 1935 ) .
j. s. bell , physics * 1 * , 195 ( 1964 ) .
e. p. specker , dialectica * 14 * , 239 - 246 ( 1960 ) ; english translation : m. p. seevinck , arxiv:1103.4537v3 ( 2011 ) .
s. kochen and e. p. specker , j. math . mech . * 17 * , 59 ( 1967 ) .
r. clifton , am . j. phys . * 61 * , 443 ( 1993 ) .
a. peres , j. phys . a : math . gen . * 24 * , l175 ( 1991 ) .
n. d. mermin , rev . mod . phys . * 65 * , 803 ( 1993 ) .
a. cabello , j. m. estebaranz , and g. garcía - alcaine , phys . lett . a * 212 * , 4 ( 1996 ) .
a. cabello and g. garcía - alcaine , j. phys . a * 29 * , 1025 ( 1996 ) .
a. a. klyachko , m. a. can , s. binicioğlu , and a. s. shumovsky , phys . rev . lett . * 101 * , 020403 ( 2008 ) .
p. kurzyński , r. ramanathan , and d. kaszlikowski , phys . rev . lett . * 109 * , 020404 ( 2012 ) .
s. yu and c. h. oh , phys . rev . lett . * 108 * , 030402 ( 2012 ) .
a. cabello , arxiv:1112.5149v2 [ quant - ph ] ( 2011 ) .
a. cabello , phys . rev . lett . * 90 * , 190401 ( 2003 ) .
p. busch , phys . rev . lett . * 91 * , 120403 ( 2003 ) .
c. m. caves , c. a. fuchs , k. k. manne , and j. m. renes , found . phys . * 34 * , 193 ( 2004 ) .
r. w. spekkens , arxiv:1312.3667 [ quant - ph ] ( 2013 ) .
r. w. spekkens , phys . rev . a * 71 * , 052108 ( 2005 ) .
y .- c . liang , r. w. spekkens , and h. m. wiseman , phys . rep . * 506 * , 1 ( 2011 ) .
s. yu , n. l. liu , l. li , and c. h. oh , phys . rev . a * 81 * , 062116 ( 2010 ) .
t. heinosaari , d. reitzner , and p. stano , found . phys . * 38 * , 1133 ( 2008 ) .
r. kunjwal , c. heunen , and t. fritz , arxiv:1311.5948 ( 2013 ) .
s. yu and c. h. oh , arxiv:1312.6470 ( 2013 ) .
r. kunjwal , arxiv:1403.0470 ( 2014 ) .
the traditional assumption of ks - noncontextuality entails two things : measurement noncontextuality and outcome - determinism for sharp measurements @xcite .
given a set of measurements @xmath139 , measurement noncontextuality is the assumption that the response function for each measurement is insensitive to the contexts ( the jointly measurable subsets that it may be a part of ) : @xmath140 . here @xmath141 is an outcome for measurement @xmath22 and @xmath142 is the hidden variable associated with the system's preparation .
outcome - determinism is the further assumption that @xmath143 , i.e. , response functions are outcome - deterministic .
a ks - nchv model is one that makes these two assumptions for sharp ( projective ) measurements .
a ks - inequality is a constraint on measurement statistics obtained under these two assumptions .
a generalized - noncontextual model , on the other hand , derives outcome - determinism for sharp measurements as a consequence of preparation noncontextuality @xcite . for unsharp measurements , however
, outcome - determinism is not implied by generalized - noncontextuality and one needs to model these measurements by outcome - indeterministic response functions @xcite .
we refer the reader to ref .
@xcite for a detailed critique of the assumption of outcome - determinism for unsharp measurements and for arguments on the reasonableness , and generality , of the notion of noncontextuality for unsharp measurements that is the basis of our work . indeed , the qubit effects we need to write the response functions for are of the form : @xmath144 .
we will relabel the outcomes according to @xmath145 so that @xmath146 in what follows .
liang , spekkens and wiseman ( lsw ) argued @xcite that the response function for these effects in a generalized - noncontextual model should be $ \eta\,[x(\lambda)]+(1-\eta)\left(\frac{1}{2}[0]+\frac{1}{2}[1]\right) $ , where @xmath148 denotes the point distribution given by the kronecker delta function @xmath149 . for @xmath150 ( sharp measurements )
this would be the traditional ks - noncontextual model . when @xmath138 ( unsharp measurements ) , the second `` coin flip '' term in the response function , $ (1-\eta)\left(\frac{1}{2}[0]+\frac{1}{2}[1]\right) $ , begins to play a role .
this term is not conditioned by @xmath142 , the hidden variable associated with the system's preparation , but is instead the response function for tossing a fair coin regardless of which measurement is being made .
it characterizes the random noise introduced , for example , by the measuring apparatus .
the important thing to note is that this noise is uncorrelated with the system's hidden variable @xmath142 . given these single - measurement response functions , one needs to work out the pairwise response functions for pairwise joint measurements of the three qubit povms .
lsw @xcite argued that the pairwise response functions maximizing the average anticorrelation @xmath44 and consistent with the single - measurement response functions are given by
$$ \xi(x_i , x_j|\lambda) = \eta\,[x_i(\lambda)][x_j(\lambda)] + (1-\eta)\left(\frac{1}{2}[0][1]+\frac{1}{2}[1][0]\right) $$
for all pairs of measurements @xmath153 .
this generalized - noncontextual model for these measurements turns out to be ks - contextual in the sense that the three pairwise response functions do not admit a joint probability distribution over the three measurement outcomes , @xmath154 , that is consistent with all three of them .
indeed this lsw model maximizes the average anticorrelation possible in specker's scenario given the single - measurement response functions , thus allowing us to obtain the lsw inequality @xmath155 .
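a compact way to see where the bound comes from is to enumerate the lsw model directly. a sketch, under the response functions just described: a deterministic λ assigns outcomes (x_1, x_2, x_3) in {0,1}³, and each pairwise measurement reports the assigned pair with probability η or a fair anticorrelated coin with probability 1 - η; no assignment can anticorrelate all three pairs, so the best achieves 1 - η/3.

```python
from itertools import product

def r3_lsw(x, eta):
    # p(anti | ij, lambda) = eta * [x_i != x_j] + (1 - eta) * 1, averaged over
    # the three contexts chosen uniformly at random
    pairs = [(0, 1), (1, 2), (0, 2)]
    return sum(eta * (x[i] != x[j]) + (1 - eta) for i, j in pairs) / 3

eta = 0.7
best = max(r3_lsw(x, eta) for x in product((0, 1), repeat=3))
print(best, 1 - eta / 3)  # both 0.7666...: at most 2 of 3 pairs can anticorrelate
```

averaging over any distribution on λ cannot exceed this deterministic maximum, which is exactly the lsw bound.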
let us note the two bounds separately : @xmath156
to be clear , the assumptions leading to the lsw inequality are :
* measurement noncontextuality ,
* outcome - determinism for projective measurements ,
* no outcome - determinism for nonprojective measurements .
on the other hand , the assumptions that lead to the corresponding ks - inequality ( upper bound of @xmath16 ) are :
* measurement noncontextuality ,
* outcome - determinism for _ all _ ( projective as well as nonprojective ) measurements .
the first set of assumptions is clearly weaker than the second set of assumptions and violation of the lsw inequality rules out even this weaker notion of noncontextuality .
our main result in this paper is that there exists a choice of measurement directions , @xmath47 , and a choice of @xmath25 for some state @xmath75 such that the quantum probability of anticorrelation , @xmath157 , beats the generalized - noncontextual bound @xmath158 .
this rules out the possibility of being able to generate these correlations by classical means , as in the lsw - model , for at least some values of sharpness parameter @xmath25 .
of course , if @xmath159 , then the generalized - noncontextual bound becomes trivial and the question of violation does not arise ; this situation corresponds to the case where , for any of the three pairwise joint measurements , the measuring apparatus outputs one of the two anticorrelated outcomes by flipping a fair coin and there is no functional dependence of the response function on @xmath142 .
in other words , one could generate perfect anti - correlation in a generalized - noncontextual model if @xmath159 .
however , as long as one is performing a nontrivial measurement ( where @xmath160 ) there is a constraint on the degree of anticorrelation imposed by generalized - noncontextuality .
what we establish is that generalized - noncontextuality can not account for the degree of anticorrelation observed in quantum theory .
clearly , quantum theory is nonclassical even given a more stringent benchmark than the one set by ks - noncontextuality .
a violation of the ks - noncontextual bound , @xmath161 , is possible in a generalized - noncontextual model , so such a violation is not in itself a signature of nonclassicality . on the other hand , violation of the generalized - noncontextual bound , @xmath158 , should be considered a signature of genuine nonclassicality in that it is not attributable either to the system's hidden variable or to random noise ( from the measuring apparatus or elsewhere ) .
in appendix f of @xcite , theorem 13 , the authors obtain necessary and sufficient conditions for joint measurability of noisy spin-1/2 observables .
we note , as pointed out by an anonymous referee , that the claimed necessary condition in the aforementioned theorem is incorrect , while the sufficient condition holds . here
we prove a necessary condition for joint measurability , one we use in the main text for triplewise joint measurability , by revising the argument for necessity made by liang et al .
: given a set of qubit povms , @xmath162 , of the form @xmath163 , and defining the @xmath164 3-vectors @xmath165 , a necessary condition for all the povms to be jointly measurable is that @xmath166 , and a sufficient condition is that @xmath167 . * proof . * we will only prove the necessary condition , which we use in the main text , and refer the reader to ref . @xcite , appendix f , for a proof of the sufficient condition .
note that @xmath168 . since this holds @xmath169 , we have @xmath170 . if all the povms are jointly measurable , then we must necessarily have a joint povm @xmath171 such that @xmath172 . then @xmath173 , and using @xmath174 , we have @xmath175 . further , @xmath176 \leq \mathrm{tr}\left[e_{x_1\dots x_n}\right] , which yields the inequality @xmath177 . now @xmath178 , and therefore @xmath179 = 1 . also @xmath180 \leq 1 , so we have , by convexity of the set @xmath181 indexed by $ x_1\dots x_n $ , @xmath182 , which is a necessary condition for joint measurability .
for @xmath183 we obtain the necessary condition for triplewise joint measurability , which is used in the main text for computing @xmath184 . the necessary and sufficient condition for pairwise joint measurability is given by @xmath185 . this is obtained as a special case , for the present problem , of the more general necessary and sufficient condition for joint measurability of unsharp qubit observables obtained in ref . @xcite . using @xmath186 , this inequality becomes @xmath187 . since @xmath188 , the necessary and sufficient condition for pairwise joint measurability becomes @xmath189 , which is used to compute @xmath190 in the main text .
_ orthogonal spin axes : _
the necessary and sufficient joint measurability condition is @xmath192 .
_ trine spin axes : _
the necessary and sufficient joint measurability condition is @xmath194
the joint measurement povm @xmath24 for @xmath195 should satisfy the marginal condition @xmath196 . also , the joint measurement should consist of valid effects , @xmath197 , where @xmath60 is the @xmath61 identity matrix .
the general form of the joint measurement effects is :
\begin{aligned}
g^{ij}_{++}&=&\frac{1}{2}[\frac{\alpha_{ij}}{2}i+\vec{\sigma}.\vec{a}^{ij}_{++}],\\
g^{ij}_{+-}&=&\frac{1}{2}[(1-\frac{\alpha_{ij}}{2})i+\vec{\sigma}.\vec{a}^{ij}_{+-}],\\
g^{ij}_{-+}&=&\frac{1}{2}[(1-\frac{\alpha_{ij}}{2})i+\vec{\sigma}.\vec{a}^{ij}_{-+}],\\
g^{ij}_{--}&=&\frac{1}{2}[\frac{\alpha_{ij}}{2}i+\vec{\sigma}.\vec{a}^{ij}_{--}],
\end{aligned}
where each effect is parameterized by four real numbers . from the marginal condition , eqs .
( [ marg1])-([marg2 ] ) , it follows that @xmath199 . these can be rewritten as @xmath200 . from eqs .
( [ bega])-([begb ] ) , it follows that @xmath201 , so one can define @xmath202 . now , from eqs . ( [ beg1])-([end1 ] ) , the following are obvious :
\begin{aligned}
\vec{a}^{ij}_{++}&=&\frac{1}{2}[\eta(\hat{n}_i+\hat{n}_j)-\vec{a}_{ij}],\\
\vec{a}^{ij}_{+-}&=&\frac{1}{2}[\eta(\hat{n}_i-\hat{n}_j)+\vec{a}_{ij}],\\
\vec{a}^{ij}_{-+}&=&\frac{1}{2}[\eta(-\hat{n}_i+\hat{n}_j)+\vec{a}_{ij}],\\
\vec{a}^{ij}_{--}&=&\frac{1}{2}[\eta(-\hat{n}_i-\hat{n}_j)-\vec{a}_{ij}].
\end{aligned}
this gives the general form of the joint measurement povms in the main text .
for qubit effects @xmath204 , where @xmath21 , the valid - effect condition ( [ valid ] ) is equivalent to the following @xcite : @xmath205 . these inequalities can be combined and rewritten as @xmath206 , where @xmath207 and @xmath208 . this is the condition for a valid joint measurement used in inequalities ( [ valid1])-([valid2 ] ) in the main text .
we need to maximize @xmath209 to obtain the optimal violation of the lsw inequality .
subject to satisfaction of the joint measurability constraints ( 13 - 14 ) in the main text , we have @xmath210 . the inequality above follows from the fact that @xmath211 , so that @xmath212 , and @xmath213 . also , we have @xmath214 . that is , for a fixed @xmath215 , @xmath216 is smallest when the measurement directions @xmath217 are coplanar and @xmath218 . from eqs .
( [ mmts1]-[mmts2 ] ) in the main text , @xmath219 .
when @xmath220 , the three measurements are coplanar and there are only two free angles , @xmath221 and @xmath222 , while the third angle is fixed by these two : @xmath223 . since @xmath224 , for any given @xmath225 and @xmath226 , @xmath227 is smallest when @xmath218 .
hence , we choose the three measurements to be coplanar such that @xmath218 and @xmath228 .
any other choice of @xmath217 will give a larger value of @xmath227 , hence also @xmath216 .
so , @xmath229 . we will now argue that this inequality for @xmath230 can be replaced by an equality .
let us take coplanar measurement directions @xmath217 such that @xmath218 .
we also take all the @xmath231 parallel to each other , i.e. , @xmath232 , @xmath233 , and @xmath234 , so that @xmath235 .
besides , @xmath236 @xmath237 . from these conditions
it follows that each @xmath231 is perpendicular to the plane and @xmath238 , @xmath239 .
this allows us to choose @xmath240 .
so , in our optimal configuration , the measurement directions are coplanar while the @xmath231 s are parallel to each other and perpendicular to the plane of measurements .
note that this also means @xmath91 will be parallel to @xmath231 and therefore perpendicular to the plane of measurements , and so will be the optimal state ( which is parallel to @xmath91 ) .
with these optimality conditions satisfied , the optimal violation can now be written as @xmath241 . the constraints from joint measurability ( 13 - 14 ) become @xmath242 . now , @xmath243 . the upper bound follows from the fact that @xmath244 , where @xmath245 and @xmath246 , is an increasing function of @xmath247 for a fixed @xmath248 , i.e. , @xmath249 . here @xmath250 and @xmath251 . so , taking @xmath252 , we have @xmath253 . note that @xmath254 for @xmath252 .
@xmath255 is the maximum value of @xmath3 for a given coplanar choice of measurement directions @xmath47 and sharpness parameter @xmath25 .
we have worked with the constraint of triplewise incompatibility originally employed by lsw @xcite . the intuition behind this constraint was to ensure that the binary qubit povms we consider do not trivially admit a joint distribution , in which case there should be no contextuality even with respect to the ks - inequality bound of @xmath256 .
however , the situation in respect of povms turns out to be richer than expected : it is possible to construct pairwise joint measurements of these qubit povms such that they violate the lsw inequality , and it is also possible for these qubit povms to be triplewise jointly measurable .
it is just that any triplewise joint measurement that one may construct for these qubit povms will not marginalize to pairwise joint measurements capable of violating the lsw inequality ( or even the ks - inequality for this scenario ) .
however , it may still be possible to construct pairwise joint measurements for the qubit povms that do not arise as marginals of any triplewise joint measurement and can therefore violate the lsw inequality .
on relaxing the requirement of triplewise incompatibility in our optimal ( trine axes ) scenario , @xmath257 , we find that the maximum violation of the lsw inequality occurs at @xmath258 , with @xmath259 and @xmath260 , the violation being @xmath261 or about @xmath262 .
this is a straightforward consequence of eq .
( [ cmax ] ) in the main text .
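the relaxed optimization is easy to reproduce under the same reconstruction used earlier (trine axes, perpendicular a_ij of maximal length, minimal α_ij): dropping the requirement η > 2/3 and scanning η over (0, √3 - 1] gives a maximum near η ≈ 0.46 with a violation of roughly 0.09 under this reconstruction.

```python
import numpy as np

def violation(eta):
    a2 = (1 - eta**2 / 2) ** 2 - eta**2   # maximal |a_ij|^2, as before
    if a2 < 0:
        return -np.inf                    # no valid pairwise joint measurement
    a, alpha = np.sqrt(a2), np.sqrt(eta**2 + a2)
    return (1 - alpha / 2 + a / 2) - (1 - eta / 3)

etas = np.linspace(1e-4, np.sqrt(3) - 1, 200001)
vals = np.array([violation(e) for e in etas])
k = int(vals.argmax())
print(f"eta* = {etas[k]:.4f}, violation = {vals[k]:.4f}")  # ~0.46 and ~0.09
```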
this result has been quoted in theorem 3 of ref .
@xcite , where the peculiar behaviour of povms with respect to joint measurability has also been discussed .
we note , however , that the claim in ref .
@xcite that _ `` lsw's inequality can be regarded as a genuine ks inequality '' _ is fallacious , which should be clear from our discussion of the lsw inequality and its comparison with the corresponding ks - inequality in appendix [ modelcompare ] . for further related discussion , see ref . | we show that three unsharp binary qubit measurements are enough to violate a generalized noncontextuality inequality , the lsw inequality , in a state - dependent manner . for the case of trine spin axes we calculate the optimal quantum violation of this inequality . besides
, we show that unsharp qubit measurements do not allow a state - independent violation of this inequality .
we thus provide a minimal state - dependent proof of measurement contextuality requiring one qubit and three unsharp measurements .
our result rules out generalized noncontextual models of these measurements which were previously conjectured to exist .
more importantly , this class of generalized noncontextual models includes the traditional kochen - specker ( ks ) noncontextual models as a proper subset , so our result rules out a larger class of models than those ruled out by a violation of the corresponding ks - inequality in this scenario . |
Consumers can expect to pay more to get a mortgage next year, the result of changes meant to reduce the role that Fannie Mae and Freddie Mac play in the market.
The mortgage giants said late Monday that, at the direction of their regulator, they will charge higher fees on loans to borrowers who don't make large down payments or don't have high credit scores—a group that represents a large share of home buyers. Such fees are... ||||| This Christmas holiday, a number of families will be allowed to stay in their homes for a short while longer. Mortgage giants Fannie Mae and Freddie Mac have both announced a two-week moratorium on evictions over the Christmas and New Year's holiday. Though the gesture seems benevolent and considerate, does it really mean that much?
A brief respite
From Dec. 18 through Jan. 3, residents of foreclosed single-family homes and two- to four-unit dwellings will be allowed to stay without disturbance from Fannie Mae or Freddie Mac. The moratorium will only stand for actual evictions, as other administrative functions and proceedings will continue during the two-week period. In addition, the local eviction companies will be allowed to continue their administrative duties to prepare for the eviction at the end of the break.
By looking at the most recent data from both Freddie Mac and Fannie Mae, the firms have foreclosed on an average of 58,000 homes per quarter. The two-week moratorium may give up to 11,000 families nationwide the time they need to reach out to friends and family to make new living arrangements.
Nothing new
The moratorium is not a new occurrence for the mortgage giants. Each year they announce the same respite for beleaguered families. Though the families appreciate the gesture, investors shouldn't view the event as a real kindness that would cost the companies anything. Since the administrative and procedural steps will continue on schedule, the only difference to Freddie and Fannie will be the physical acquisition of the foreclosed property.
Though Fannie and Freddie wouldn't be able to sell the foreclosed property before the eviction, that would possibly pose a slight delay for the firms, but the Christmas holiday is historically a slow time for property sales -- giving the GSEs very little downside for offering the moratorium.
Season of giving
Announcing the moratorium has given both Fannie Mae and Freddie Mac the opportunity to give some advice to troubled homeowners. Both firms have expressed their desire that borrowers who find themselves in trouble should seek help as soon as possible. Terry Edwards, chief operating officer for Fannie Mae, said, "We encourage any homeowner who is having difficulty making their mortgage payment to reach out for help right away. Fannie Mae will continue to help borrowers avoid foreclosure whenever possible." Both GSEs have good records of helping troubled borrowers avoid foreclosure, with Freddie Mac saying eight out of 10 borrowers have been able to stay in their homes with the foreclosure alternatives it provides.
Hopefully the extra time the families get from Freddie Mac and Fannie Mae's moratorium will be enough to allow them a happy holiday season. For investors, the mortgage giants' generosity should give you a little bit of the warm and fuzzies, without any concerns about their operations. | – An effort to loosen Fannie Mae and Freddie Mac's grip on the mortgage market will come at a price: pricier mortgages. The two announced late Monday that, at the insistence of the Federal Housing Finance Agency, they will increase their fees to borrowers without impeccable credit (designated as those with scores of 680 to 760 out of 850) or those making a down payment of less than 20%. The Wall Street Journal provides an example: A borrower making a 10% down payment and carrying a credit score of 735 would currently pay 0.75% of the loan amount in fees on a 30-year fixed-rate mortgage. In 2014, that would rise to 2%, a fee that the Journal calculates would raise the mortgage rate some 0.4 percentage points. And even those making a down payment of more than 20% will see the fees rise if they're in the so-so credit zone. The changes go live in March, but may start showing up earlier. The reason for the move: to make it more competitive for private investors, who "target a higher rate of return" and charge higher fees, to back mortgages. Fannie and Freddie back about two-thirds of America's mortgages, and a FHFA official says even the new rates will be lower than private ones. The Journal pairs that with some gloomy responses, such as this from Lewis Ranieri, credited with co-inventing the mortgage-backed security. He thinks the private sector isn't in a position to lend more yet, so "you're just making housing less affordable." Meanwhile, Daily Finance provides a brief respite from the gloom, reporting that, as they do each holiday season, Fannie and Freddie won't evict residents of foreclosed single-family homes between today and Jan. 3—a move that could affect as many as 11,000 families. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Vaccine Injured Children's
Compensation Act of 2001''.
SEC. 2. PURPOSE OF PROGRAM.
Section 2110(a) of the Public Health Service Act (42
U.S.C. 300aa-10(a)) is amended by adding at the end the following
sentence: ``Such Program is a remedial program that is to be construed,
both as to causation and damages, in a fashion that gives broad effect
to the remedial purpose of this subtitle. Concepts of sovereign
immunity do not apply in such Program.''.
SEC. 3. BURDEN OF PROOF.
Section 2113 of the Public Health Service Act (42 U.S.C. 300aa-13)
is amended--
(1) in subsection (a)(1)--
(A) in subparagraph (A), by striking ``a
preponderance of the evidence'' and inserting the
following: ``submitting evidence sufficient to justify
a belief by a fair and impartial individual that
petitioner's claims are well grounded as to''; and
(B) in the matter after and below subparagraph (B),
by adding at the end the following: ``When, after
consideration of all evidence and material of record in
a case, there is an approximate balance of positive and
negative evidence, while applying the standard under
subparagraph (A), regarding the merits of an issue
material to the determination of the matter, the
benefit of the doubt in resolving each such issue shall
be given to petitioner.'';
(2) in subsection (a)(2)(B)--
(A) by inserting ``only'' before ``include
infection''; and
(B) by inserting a comma after ``metabolic
disturbances'';
(3) in subsection (a), by adding at the end the following
paragraph:
``(3) Any defense raised by respondent that the illness,
disability, injury, condition, or death described in the
petition was in fact due to factors unrelated to the
administration of the vaccine must be proved by clear and
convincing evidence and may not be made on the basis of a
repudiation of the Vaccine Injury Table.''; and
(4) in subsection (b)(1), in the matter after and below
subparagraph (B), by striking ``shall consider the entire
record and the course of the injury'' and inserting the
following: ``shall consider the entire record. In the
evaluation of damages and future needs, the special master or
court shall consider the course of injury''.
SEC. 4. COMPENSATION ISSUES.
Section 2115 of the Public Health Service Act (42 U.S.C. 300aa-15)
is amended--
(1) in subsection (a)--
(A) in paragraph (1)(A)--
(i) in clause (ii), by striking ``and'' at
the end;
(ii) in clause (iii), by striking the
period at the end of subclause (II) and
inserting ``; and''; and
(iii) by adding at the end the following
clause:
``(iv) are necessary for the establishment and
maintenance of a trust to receive program funds.'';
(B) in paragraph (4), by adding after the period
the following sentence: ``No reduction to net present
value shall be applied to this portion of a
petitioner's award.''; and
(C) by adding at the end the following paragraph:
``(5) Actual unreimbursable expenses that have been or will
be incurred for family counseling and/ or training determined
to be reasonably necessary and that result from the vaccine-related
injury for which the petitioner seeks compensation.'';
(2) in subsection (b)--
(A) in paragraph (1), by adding ``and'' after the
comma at the end;
(B) in paragraph (2), by striking ``, and'' and
inserting a period; and
(C) by striking paragraph (3); and
(3) in subsection (e), by adding at the end the following
paragraph:
``(4)(A) During the pendency of a petition filed under
section 2111 (whether for a vaccine administered after the
effective date of this part or before such date), the special
master or court may, upon application of the petitioner, award
payments to cover the petitioner's reasonable attorneys' fees
and other costs that have been incurred with respect to the
petition.
``(B) Payments under subparagraph (A) regarding the
petition involved may not be made more frequently than once
every 90 days.''.
SEC. 5. LIMITATIONS OF ACTIONS.
Section 2116 of the Public Health Service Act (42 U.S.C. 300aa-16)
is amended--
(1) in subsection (a)--
(A) in paragraph (2), by striking ``36 months'' and
inserting ``72 months'';
(B) in paragraph (3)--
(i) by striking ``24 months'' and inserting
``36 months''; and
(ii) by striking ``48 months'' and
inserting ``72 months''; and
(C) by adding after and below paragraph (3) the
following:
``Notwithstanding the limitations contained in this subtitle as amended
by the Vaccine Injury Compensation Program Corrective Amendments of
2001, the time period for filing a petition shall be extended an
additional 36 months from the date the petitioner first knew or
reasonably should have known that the petitioner may have been eligible
for compensation under this subtitle, including knowledge not only that
the injury or death involved may have been caused by the vaccine, but
also that a petition under section 2111 was a potential remedy.'';
(2) in subsection (b), in the matter preceding paragraph
(1), by striking ``2 years'' and inserting ``72 months''; and
(3) by adding at the end the following subsections:
``(d) The statute of limitations for filing a petition under
section 2111 shall be tolled until petitioner reaches the age of 18,
and, if a petitioner is incompetent, until 24 months after a guardian
is appointed or otherwise qualified by a court of competent
jurisdiction.
``(e) Notwithstanding section 2114(c)(4) or 2111(b)(2), if a
petitioner who previously filed a petition under section 2111 was
denied compensation because of (1) failure to satisfy the former $1,000
unreimbursed expenses requirement of section 2111(c)(1)(D)(I), or (2)
failure to satisfy the filing deadlines set forth in section 2114, in
any case in which the petitioner would have satisfied the limitations
of actions provisions of this subtitle as amended by the Vaccine Injury
Compensation Program Corrective Amendments of 2001, then the petitioner
shall have the right to refile the petition within 72 months after
reaching the age of majority, or within 24 months after the effective
date of such Amendments, whichever is the longer period.''. | Vaccine Injured Children's Compensation Act of 2001 - Amends provisions of the Public Health Service Act relating to the National Vaccine Injury Compensation Program to: (1) designate the Program as a remedial program under which sovereign immunity does not apply; (2) change the burden of proof requirement for the award of compensation from a preponderance of the evidence to evidence sufficient to justify a belief that the petitioner's claims are well grounded (while giving the benefit of doubt to the petitioner); (3) require any defense raised that an illness, injury, or death was due to unrelated factors to be proved by clear and convincing evidence; (4) authorize as Program compensation expenses necessary for the establishment of a trust to receive Program funds, as well as expenses incurred for family counseling or training necessitated by the vaccine-related injury; (5) allow the award of petitioner's attorneys' fees; (6) increase to up to 72 months the statute of limitations under the Program; (7) allow such period to be extended for an additional 36 months after a petitioner first knew or should have known about his or her eligibility for compensation; (8) toll the statute of limitations until a petitioner reaches age 18 and, if a petitioner is incompetent, until 24 months after a guardian is appointed; and (9) authorize the refiling of a previously failed petition if the petitioner would have met the extended statute of limitations provided under this Act. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Bi-State Aircraft Noise Correction
Act of 1996''.
SEC. 2. FINDINGS, DECLARATION, AND PURPOSE.
(a) Findings.--Congress finds that--
(1) the Expanded East Coast Plan of the Federal Aviation
Administration has resulted in significantly increased levels
of aircraft noise over New Jersey;
(2) over the past 30 years, and especially since the
implementation of the Expanded East Coast Plan, relentless
noise from aircraft departing from Newark International Airport
has adversely affected the residents of northwestern Staten
Island, New York;
(3) the Federal Aviation Administration has stalled,
obfuscated, resisted, and delayed any meaningful attempt to
mitigate the aircraft noise problem in New Jersey created by
the Administration through implementation of the Expanded East
Coast Plan; and
(4) the efforts of the Federal Aviation Administration to
mitigate aircraft noise levels on Staten Island have been
inadequate.
(b) Declaration.--Congress declares that the Federal Aviation
Administration should remedy the problem it has created, to the maximum
extent practicable, by formulating and implementing plans to mitigate
aircraft noise over certain areas of New Jersey and Staten Island.
(c) Purpose.--It is the purpose of this Act to compel the
Administrator to mitigate aircraft noise over certain areas of New
Jersey and Staten Island.
SEC. 3. REDUCTION IN AIRCRAFT NOISE OVER NEW JERSEY.
Not later than 6 months after the date of the enactment of this
Act, the Administrator of the Federal Aviation Administration
(hereinafter in this Act referred to as the ``Administrator'') shall
develop and publish, without compromising safety, a comprehensive plan
to reduce aircraft generated noise in New Jersey by 6 decibels (yearly
day-night average sound level) for at least 80 percent of the people
residing within 18 nautical miles of Newark International Airport,
relative to the noise values reported in the Federal Aviation
Administration document, ``Environmental Impact Statement, Runway 11
ILS at Newark International Airport (September 16, 1993)'', and
excluding regions where aircraft-generated noise exceeds 65 decibels
(yearly day-night average sound level).
SEC. 4. REDUCTION IN AIRCRAFT NOISE OVER STATEN ISLAND.
Not later than 6 months after the date of the enactment of this
Act, the Administrator shall develop and publish a plan to investigate
and test southbound departure procedures from runway 22 of Newark
International Airport that will fully utilize, without compromising
safety, all of the allowable and available climbout airspace and that
will result in a minimum 25 percent decrease in aircraft noise on the
ground in northwestern Staten Island. The Administrator shall also
investigate a straight-out southbound departure from runway 22.
SEC. 5. OTHER MEASURES.
(a) Remediation Efforts.--The Administrator shall undertake such
remediation efforts as may be necessary to mitigate aircraft noise
within the sound level contour described in section 3 pursuant to part
150 of title 14, Code of Federal Regulations.
(b) Nonapplicability of EIS Requirement.--In carrying out the
activities under this Act, the Administrator shall not be required to
prepare an environmental impact statement in accordance with the
National Environment Policy Act of 1969 or any other law.
SEC. 6. PROCEDURE.
(a) Standing.--In order to ensure compliance with this Act by the
Administrator--
(1) the New Jersey Citizens for Environmental Research, and
(2) a group to be designated by the Staten Island Borough
President,
shall have standing in United States district court to compel the
Administrator to comply with this Act.
(b) Venue.--The venue for any such action shall be the United
States district court in Newark, New Jersey.
(c) Attorney's Fees.--
(1) Award.--Except as provided in paragraph (2), the
Administrator shall pay court costs and reasonable attorney
fees incurred by the organizations referred to in subsection
(a) with respect to an action to compel the Administrator to
comply with this Act. Punitive damages may not be awarded.
(2) Limitation.--Paragraph (1) shall not apply if the judge
imposes a sanction under rule 11 of the Federal Rules of Civil
Procedure on an attorney, law firm, or party in the plaintiff's
case or if the suit is dismissed by a judge on a motion by the
defendants for summary judgment.
SEC. 7. IMPLEMENTATION.
(a) Deadlines.--The Federal Aviation Administration shall begin
implementation of--
(1) the plan described in section 3 on or before the 90th
day after the date of publication of the plan;
(2) the plan described in section 4 on or before the 90th
day after the date of publication of the plan; and
(3) the plan described in section 3 or 4 on or before the
90th day after the date of any judicial order or settlement
agreement which is issued or entered into in response to a
civil action brought in accordance with section 6(a) and which
requires the implementation of such plan.
(b) Limitation.--No plan described in section 3 or 4 shall have the
effect of reducing aircraft arrivals to or departures from Newark
International Airport. | Bi-State Aircraft Noise Correction Act of 1996 - Declares that the Federal Aviation Administration (FAA) should remedy the problem it has created by formulating and implementing plans to mitigate aircraft noise over certain areas of New Jersey and Staten Island.
Instructs the FAA Administrator to: (1) develop and publish a plan to reduce aircraft-generated noise in New Jersey by six decibels for at least 80 percent of the people residing within 18 nautical miles of Newark International Airport; (2) investigate and test southbound departure procedures from Newark International Airport runway 22 that will result in a minimum 25 percent decrease in aircraft noise on the ground in northwestern Staten Island; (3) investigate a straight-out southbound departure from runway 22; and (4) undertake remediation efforts to mitigate aircraft noise within a specified sound level contour.
Confers standing in Federal district court upon the New Jersey Citizens for Environmental Research (and a group to be designated by the Staten Island Borough President) to compel the Administrator to comply with this Act.
Sets deadlines for FAA implementation of this Act. |
The talented comedienne and anti-gun activist hosted SNL on Saturday and featured in an ad poking fun at gun enthusiasts.
Back in late July, a little over a month after the Charleston church shooting that killed nine African Americans, one month before the Roanoke shooting that took the lives of two news station employees live on air, and a few months before the recent Umpqua Community College shooting that claimed the lives of nine people, a misogynistic gunman opened fire at a screening of the movie Trainwreck in a Lafayette, Louisiana, movie theater, leaving two dead and nine wounded.
In the wake of the shooting, Trainwreck star Amy Schumer joined her cousin, Senator Chuck Schumer of New York, and made an emotional plea for stricter gun control regulations.
“Unless something is done and done soon, dangerous people will continue to get their hands on guns,” Schumer said. “We need a background check system without holes and fatal flaws. We need one with accurate information that protects us like a firewall. The critics scoff and say, ‘Well, there’s no way to stop crazy people from doing crazy things,’ but they’re wrong. There is a way to stop them. Preventing dangerous people from getting guns is very possible. We have common-sense solutions. We can toughen background checks and stop the sale of firearms to folks who have a violent history or history of mental illness.”
She later blamed “money”—i.e. the gun lobby—for the persistent lack of gun control in America in follow-up statements made at Italy’s Locarno Film Festival.
As the host of the second episode of this season’s Saturday Night Live, Schumer poked fun at America’s national obsession with guns, featuring in a mock ad (think Zales commercial) of regular Americans experiencing pivotal moments in their lives, from receiving gifts to childbirth to first love.
“Whatever you’re waiting for, whatever you face, whatever you’re looking for, there are things we all share: love, family, connection, a sense of purpose, and also… guns,” a narrator announces.
Yes, in the ad—titled “Guns: We’re Here to Stay”—guns play a role in each of the characters’ lives. Schumer is gifted a gun as an anniversary present by Taran Killam; Bobby Moynihan and Schumer’s Trainwreck co-star Vanessa Bayer visit the hospital to have their child with a rifle in tow; Kate McKinnon fires shots in the air from her revolver while taking a leisurely jog through the park; the list goes on. ||||| Creators of the Star Wars franchise want the world to know they did not give comedian Amy Schumer permission to feature its characters in her GQ magazine spread.
Schumer is currently appearing in the August issue of the mag as a nude Princess Leia, alongside Star Wars droids.
In response to an angry fan on Twitter, Disney and Lucasfilm announced it had no affiliation with the “inappropriate” Star Wars-themed spread, which shows the topless Schumer sitting in a bed, smoking cigarettes beside R2-D2 and C-3PO.
“Lucasfilm & Disney did not approve, participate in or condone the inappropriate use of our characters in this manner,” the company wrote.
While the mag named Schumer as “the funniest woman in the galaxy,” many Star Wars fanatics seemingly do not agree.
“I’m very mad to see C-3PO sexualized in this way,” said one fan, who even suggested a lawsuit against GQ, per Page Six.
“Those images disgust me and cheapen your characters,” suggested another.
Thursday morning, political commentator and Drudge Report editor Matt Drudge tweeted the question, “Who is @amyschumer? Where did she come from? Why is she being force-fed on population?”
GQ released a behind-the-scenes clip from Schumer’s shoot Thursday, below, in which the 34-year-old actress and comedian makes sexual jokes about the fictional characters.
“Are we not supposed to talk about what really happened?” asked Schumer as she glances over at C-3PO.
I love @AmySchumer & her comedy routine(s). But this GQ #StarWars spread is just in poor taste. @starwars — Zack W. (@RogueKnite) July 16, 2015 ||||| She went on to talk about the women in her life from her 18-month-old niece to meeting Hillary Clinton for the first time and finding out she didn't like tequila. She made signature self deprecating jokes about her shape and size saying she doesn't take nude selfies because everything under he dress looks like a lava lamp. But the message of the monologue was clear: from childhood to stardom to middle age and beyond, women are all getting hosed by Hollywood.
The best sketch was a fake public service announcement for firearms. A week after the Oregon mass shooting and just days after a double shooting, the ad features a collection of seemingly random people in their lives. A woman waiting at a restaurant for a date who is late, followed by an awkward dude checking out a girl at a party, a couple having a baby, a runner in Central Park.
"There are things we all share," the ad says. "Love. Family. Connection. A sense of purpose. And also: Guns. Guns are there. In little events, and big ones, when things fall apart, or it all comes together. They unite us, comfort us, bring us joy, and strength." | – Hometown girl Amy Schumer took the stage at Saturday Night Live and wasted little time in taking on what the Daily Beast terms "America's national obsession with guns," vis a vis a mock ad that inserts firearms into the more tender moments in our lives. "Whatever you’re waiting for, whatever you face, whatever you’re looking for, there are things we all share: love, family, connection, a sense of purpose, and also… guns," a narrator announces. Schumer, who's spoken out in favor of stricter gun laws after a theater shooting killed two people at a screening of her movie, Trainwreck, also took on female body image in her opening monologue, zinging the Kardashians as "a whole family who take the faces they were born with as a light suggestion." Salon calls her performance a "near-perfect show." (Click to see how Schumer recently angered Star Wars fans.) |
the motor neuron diseases ( mnd ) are a class of progressive neurologic diseases characterized by selective degeneration of the motor neurons that govern voluntary muscle movement .
amyotrophic lateral sclerosis ( als ) , also called lou gehrig's disease , is the most common form of motor neuron disease . it is a fatal adult - onset neurodegenerative disease , and major characteristic symptoms of als include muscle weakness , spasticity , atrophy , paralysis and premature death .
motor neurons in the cortex , brain stem and spinal cord gradually degenerate in als patients , and most als patients die within 3~5 years of disease onset due to respiratory failure . however , approximately 10% of patients survive more than ten years .
the incidence rate of als is 2 out of 100,000 individuals per year , and the average age of onset is approximately 50 years .
als is sporadic in 95% of patients and seems to occur randomly throughout the population ( called sporadic als ( sals ) ) .
the remaining 5% of als patients have at least one affected first degree relative ( familial als ( fals ) ) .
there is no cure for als , and riluzole , the only fda - approved drug for als treatment , extends survival by only a few months .
the first identified als disease gene was sod1 ( copper zinc superoxide dismutase 1 ) , which was found in 1993 .
thanks to recent advances in sequencing and genotyping technology , many new genetic mutations have been identified in fals and sals patients .
these mutations are found in the hnrnpa1 , pfn1 , taf15 , atxn2 , c9orf72 , ubqln2 , optn , vcp , fus and tardbp genes .
interestingly , the most prominent histopathological hallmark of als is the accumulation of misfolded oligomers or protein inclusions containing tdp-43 , fus or sod1 protein .
moreover , most als patients have dense tdp-43 aggregates in affected neurons and glia in the cns . despite protein aggregation being a prominent pathological hallmark of als , many questions still need to be addressed , especially regarding the pathological role and formation mechanism of these protein aggregates .
recently , it was proposed that cell to cell transmission of misfolded protein aggregates ( a prion - like mechanism ) may directly contribute to the generation of novel protein aggregates and the propagation of neurodegeneration in als and other neurodegenerative diseases , such as alzheimer 's disease , parkinson 's disease and huntington 's disease .
in this review , we highlight the recent findings implicating a prion - like mechanism as a key player in neurodegeneration in als patients and discuss the possible therapeutic strategies for als based on a prion - like mechanism .
transmissible spongiform encephalopathies ( tses ) , also called prion diseases , are fatal neurodegenerative diseases of mammals .
prion diseases are caused by the conversion of normally folded prion protein ( prp ) into a misfolded , aggregation - prone form . prp aggregates can self - propagate and elongate by binding to and converting monomers of normally folded prp .
als shares several features with prion diseases . first of all , als patients have aggregate structures containing misfolded self - proteins in their affected neurons .
furthermore , recent cultured cell line and animal model studies suggest that the misfolded forms of sod1 and tdp-43 do self - propagate within neuronal cells and transmit to neighboring cells .
one of the most well - known clinical observations of als is focal onset of motor weakness in the spinal and bulbar regions followed by contiguous spreading of the disease .
interestingly , intensive autopsy studies of als patients show that the loss of lower motor neurons is most apparent at the region of onset and decreases in a graded manner with further distance from the anatomic location of disease onset .
these clinical data indicate that neurodegeneration in als is an orderly propagating process , which seems to share the signature of seeded self - propagation with pathogenic prion proteins . despite a number of parallels between prion disease and als , there is an important difference :
prion diseases can be transmitted through an animal population via oral uptake , blood transfusion or other sources of direct contact .
tdp-43 and sod1 misfolded protein aggregates found in als patients can induce the misfolding of their normally structured counterparts ; however , at least under natural circumstances , als is not an infectious disease .
therefore , we should differentiate the terms " prion " and " prion - like " .
the term " prion - like mechanism " will be used to identify the molecular mechanisms that share common features with the self - propagation and spreading characteristics of prion proteins .
a summary of the evidence implicating the prion - like mechanism within als is shown in table 1 .
native sod1 forms a very stable homodimer , but almost all als - linked sod1 mutants are susceptible to partial unfolding at physiological ph and temperature .
furthermore , mutant sod1-containing insoluble inclusions are highly accumulated within affected motor neurons of sod1-related fals patients .
transgenic mice expressing human sod1 with a pathogenic mutation very accurately recapitulate common characteristics of als such as the selective progressive loss of motor neurons and a progressive loss of motor activity .
sod1-positive inclusions found in human patients and fals mouse models show identical granule - coated fibrillar morphologies .
fibrillar protein aggregates that are rich in β - sheet structures act as a structural template to convert normal proteins into a misfolded structure and then elongate the protein fibril .
it is not clear if the sod1-containing inclusions of fals patients are β - sheet - rich fibrils , but fibrillar aggregates of the sod1 mouse model contain amyloid - like aggregates with a β - pleated sheet .
moreover , spinal cord homogenates of transgenic mice expressing the als - linked g93a - mutant human sod1 protein triggered amyloid - like fibril formation of purified wild type and mutant sod1 protein . other studies also show that wild type and mutant misfolded sod1 proteins can induce misfolding of cell - endogenous normal structured wild type sod1 in a physiological intracellular environment .
one study removed misfolded mutant sod1 proteins using a mutant - form - specific antibody ( gx - ct ) in a hek cell line expressing the sod1 g127x mutant , but this removal of misfolded seeds did not prevent aggregation of endogenous sod1 .
this result indicates that the newly generated misfolded sod1 also acts as a template for the self - propagation of misfolded sod1 aggregates .
as mentioned above , many studies suggest that misfolded sod1-containing fibrils can act as a structural template for a " prion - like replication " from native - structured protein to insoluble misfolded conformers . misfolded fibrils can be elongated by this conversion process of the normal protein .
breakage of misfolded fibrils is important for this self - propagation process because sheared pieces of misfolded fibrils can be propagated through the template - assisted misfolding of native protein .
this self - propagation process is analogous to the replication of infectious prion aggregates ( fig .
another key feature of prion - like mechanisms is the cell - to - cell spread of misfolded aggregates .
another possible process is cell - death - induced leakage of misfolded seeds : the death of affected cells releases aggregated protein into the extracellular environment , and then neighboring cells can take up these aggregates by phagocytosis ( fig .
therefore , there is a high possibility that seed spreading by cell death is implicated in cell - to - cell transmission of misfolded aggregates .
a recent study reported that purified als - linked sod1 mutant aggregates ( sod1 ) effectively penetrate into cells and convert the endogenous wild type sod1 to misfolded aggregates in the neuro-2a cell line .
these aggregates are continuously released by cells and taken up into neighboring cells via macropinocytosis .
these findings indicate that als - linked sod1 aggregates have prion - like properties such as self - perpetuation and the transmission of the misfolded pathological proteins to adjacent cells .
misfolded aggregate spreading mediated by secreted mutant sod1 may not be limited to neuron to neuron propagation .
it is very well accepted that secreted toxic factors produced by glial cells expressing als - causing mutant sod1 induce the loss of motor neurons in the fals sod1 mouse model . using astrocytes derived from neural progenitor cells from the postmortem tissues of als patients , haidet - phillips et al . demonstrated that astrocytes generated from tissues from als patients are toxic to motor neurons derived from non - als postmortem tissues .
importantly , knock - down of sod1 in als tissue - derived astrocytes was found to mitigate the motor neuron toxicity of these astrocytes .
these findings indicate that the generation of a glial cell released factor that is toxic to motor neurons is dependent on the glial sod1 protein .
furthermore , overexpression of als - linked mutant sod1 in astrocytes induced an increase in exosome release , and those astrocyte - produced exosomes contained mutant sod1 proteins .
not surprisingly , mutant sod1 was transmitted to the cytoplasm of spinal neurons through astrocyte - released exosomes .
however , additional experimental studies are required to prove that secretory vesicles released from glial cells act as a messenger for misfolded seeding protein .
tar dna binding protein ( tdp-43 , 43 kda ) is a highly conserved rna / dna binding protein involved in various rna processing pathways including stress granule formation and rna splicing .
tdp-43 is a major component of pathological inclusions found in spinal cord motor neurons , hippocampal and frontal cortex neurons and glial cells in most sals and sod1-negative fals cases . under normal conditions ,
most tdp-43 protein is localized in the nucleus ; however , in als patients , neurons with cytoplasmic tdp-43 aggregates often show a corresponding reduction of the nuclear tdp-43 level .
genetic studies have identified tardbp ( which encodes tdp-43 ) mutations in ~4% of fals cases and a small percentage of sals cases . following the identification of tardbp mutations in als patients ,
other fals mutations were identified in fus , which also encodes the rna binding protein fus . both purified tdp-43 and fus can easily aggregate in vitro . furthermore ,
both tdp-43 and fus contain a prion - like glutamine / asparagine rich domain that shares similarities with yeast prion protein , and this domain is essential for amyloid - like fiber polymerization in cell - free models of rna granule formation . like sod1 proteins ,
the als - causing tdp-43 mutation enhances neurotoxicity and abnormal aggregate formation , and the c - terminal domain is essential for this aggregation process .
moreover , almost all als - linked tdp-43 mutations are found in the c - terminal region , and c - terminal truncated fragments of tdp-43 show significantly enhanced aggregation properties in vitro and in cells . taken together , these data suggest that tdp-43 - positive neuronal cytoplasmic inclusions are driven by the prion - like c - terminal domain of tdp-43 and that the pathologic aggregation process is accelerated by als - linked tdp-43 mutations .
a recent study reported that the recombinant tdp-43 protein forms sarkosyl - insoluble fibrillar aggregates in vitro , and transduction of these tdp-43 fibrils into cultured hek293 t cells overexpressing tdp-43 induces fibrillation of the endogenous tdp-43 .
nonaka et al . also demonstrated that the introduction of detergent insoluble tdp-43 aggregates from als or ftld - tdp patients into sh - sy5y cells expressing tdp-43 induces aggregation of phosphorylated and ubiquitinated tdp-43 in a prion - like , self - templating manner .
in addition , phosphorylated tdp-43 aggregates are transmitted between cultured cells , and intracellular tdp-43 aggregates are associated with exosomes . based upon these results
, the prion - like properties of tdp-43 may contribute to the pathological mechanism of als .
it remains to be determined whether fus also shows prion - like propagation , but recent studies have reported that an als - causing mutant of fus forms amyloid - like fibrillar aggregates , and these fibrils act as seeds to trigger the aggregation of wild - type fus in vitro .
accumulating evidence suggests that the prion - like mechanism of tdp-43 and fus plays an important role in als pathogenesis ; however , further studies are needed to elucidate their exact mechanism of action and pathological effect .
the focality of clinical onset and regional spreading of neurodegeneration are typical features of als .
one of the possible models for the progression of als would be the spreading of toxic misfolded seeds from a focal site .
if a prion - like mechanism represents a key component of disease progression and persistence , antibody - based drug development could be possible .
antibodies could promote the breakdown of misfolded aggregates and block their ability to act as a nucleation seed or penetrate into neighboring cells .
interestingly , intracerebroventricular infusion of monoclonal antibodies specific to misfolded sod1 extends the lifespan of mice with als ( g93a - sod1 mouse model ) .
another promising therapeutic strategy for als involves elucidating the shared pathological mechanism between als and prion diseases .
recent studies indicate that elevation of the eif2α phosphorylation level is a common feature of prion diseases and als models .
strikingly , downregulation of eif2α phosphorylation by perk inhibitor treatment mitigates the toxicity of prion proteins and tdp-43 [ 39 , 41 ] .
consequently , dissecting the molecular mechanism of a prion - like process may yield valuable insights into developing therapeutic strategies for als .
therefore , lessons and tools from the prion field may become useful for future research on als . | als is a fatal adult - onset motor neuron disease .
motor neurons in the cortex , brain stem and spinal cord gradually degenerate in als patients , and most als patients die within 3~5 years of disease onset due to respiratory failure .
the major pathological hallmark of als is abnormal accumulation of protein inclusions containing tdp-43 , fus or sod1 protein .
moreover , the focality of clinical onset and regional spreading of neurodegeneration are typical features of als .
these clinical data indicate that neurodegeneration in als is an orderly propagating process , which seems to share the signature of a seeded self - propagation with pathogenic prion proteins . in vitro and cell line experimental evidence
suggests that sod1 , tdp-43 and fus form insoluble fibrillar aggregates .
notably , these protein fibrillar aggregates can act as seeds to trigger the aggregation of native counterparts .
collectively , a self - propagation mechanism similar to prion replication and spreading may underlie the pathology of als . in this review , we will briefly summarize recent evidence to support the prion - like properties of major als - associated proteins and discuss the possible therapeutic strategies for als based on a prion - like mechanism . |
the measurement and control of light produced by quantum systems have been the focus of interest of cavity quantum electrodynamics @xcite .
especially , the emission of light powered by solid - state devices coupled to nanocavities is an extensive area of research due to its promising technological applications , such as infrared and low - threshold lasers @xcite , single and entangled photon sources @xcite , as well as various applications in quantum cryptography @xcite and quantum information @xcite .
experiments with semiconductor quantum dots ( qds ) embedded in microcavities have revealed a plethora of quantum effects and offer desirable properties for harnessing coherent quantum phenomena at the single photon level .
for example , the purcell enhancement @xcite , photon anti - bunching @xcite , vacuum rabi splitting @xcite and strong light - matter coupling @xcite . these and many other quantum phenomena are being confirmed experimentally by observing the power spectral density ( psd ) of the light emitted by quantum - dot - cavity ( qd - cavity ) systems .
thus , the psd , or so - called emission spectrum , is the only relevant information about the system ; it allows one to study the properties of light via measurements of correlation functions , as stated by the wiener - khintchine theorem @xcite . in order to compute the absorption or emission spectrum in open quantum systems , more precisely in qd - cavity systems , different approaches have been developed from a theoretical point of view .
for example , the method of thermodynamic green's functions , which is applied to the determination of the susceptibilities and absorption spectra of atomic systems embedded in nanocavities @xcite , and the time - resolved photoluminescence approach , whose application allows one to determine the emission spectrum by considering an additional subsystem called the photon reservoir @xcite . however , these methods have their own approximations and restrictions and therefore are not widely used . frequently , the emission spectrum in qd - cavity systems is computed through the quantum regression theorem ( qrt ) @xcite , since it relates the evolution of mean values of observables to the two - time correlation functions .
it is worth mentioning that this approach can be difficult to implement in a computer program , because the computational complexity of the qrt approach increases significantly as the number of qds or modes inside the cavity , and hence the dimensionalities of the hilbert spaces , become large . in general , this approach is time - consuming because it requires solving a large system of coupled differential equations , and numerical instabilities can arise .
moreover , theoretical complications can appear related to the dynamics of the operators involved , as we will point out in the next section . in spite of this , the qrt approach is widely used in theoretical works , for example , in studies of the luminescence spectra of coupled light - matter systems in microcavities in the presence of continuous and incoherent pumping @xcite , and of the relation between dynamical regimes and entanglement in qd - cavity systems @xcite . in the past ,
the green's functions technique ( gft ) was successfully applied to the calculation of the micromaser spectrum @xcite , as a methodology in which the two - time correlation function is treated as a green's function that decays like the off - diagonal elements of the reduced density matrix of the system for very specific initial conditions .
nevertheless , this approach has not been widely noticed in many significant situations in open quantum systems .
possibly , this is due to limitations in the implementation of that work . the purpose of this work is to present a simple but efficient numerical method based on the qrt formalism which overcomes the inherent difficulties associated with the direct application of the qrt by solving the dynamics of the system directly in the frequency domain .
this paper is organized as follows : section [ sec : qrt ] reviews the theoretical background of the quantum regression theorem and its relationship with the green's functions technique .
section [ sec : appl ] deals with a concrete application of our proposed method for computing the emission spectra of a dissipative qd - cavity system . in section [ results ] we show the numerical calculations of the emission spectrum for the cavity and the quantum dot from both the gft and qrt approaches .
finally , we conclude in the last section .
one of the most important measurements when light resonantly excites a qd - cavity system is the emission spectrum of the system . from a theoretical point of view , it is assumed that the emission corresponds to a stationary and ergodic process whose psd can be calculated using the well - known wiener - khintchine theorem @xcite .
it states that the emission spectrum is given by the fourier transform of the correlation function ( two - time expectation value ) of the field operator @xmath0 , @xmath1 , where @xmath2 and the normalizing factor is the population @xmath3 at the steady state . in order to calculate the two - time expectation value , the qrt is frequently used ; it states that if a set of operators @xmath4 satisfies the dynamical equations @xmath5 , then @xmath6 is valid for any operator @xmath7 at arbitrary time @xmath8 .
it is worth mentioning that the validity of this theorem holds whenever a closed set of operators is involved in the dynamics . in general , obtaining the closed set of operators can be a difficult or even impossible task , since as many operators as necessary must be added in order to close the dynamics of the system .
for example , in order to compute the emission spectrum in a simple model of a qd - cavity system @xcite , two new operators are required because the field operators in the interaction picture do not form a complete set .
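as a side note , the wiener - khintchine relation quoted above is straightforward to evaluate numerically once a two - time correlation function is available . the following is a minimal sketch ( our own illustration ; the function name and the normalization and sign conventions are our assumptions , not the paper 's ) :

    import numpy as np

    def psd_from_correlation(corr, dt):
        # corr[j] approximates the stationary correlation at lag tau = j * dt ;
        # by the wiener - khintchine theorem the spectrum is its fourier
        # transform ( one - sided estimate , up to normalization conventions )
        spec = 2.0 * np.real(np.fft.fft(corr)) * dt
        omega = 2.0 * np.pi * np.fft.fftfreq(corr.size, d=dt)
        order = np.argsort(omega)
        return omega[order], spec[order]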
before considering the green's functions technique , we will briefly describe the calculation of the qrt in an alternative form which will be the starting point of the following section .
let us consider a system operator @xmath9 which does not operate on the reservoir ; then its single - time expectation value in the heisenberg picture is given by @xmath10 . the operator @xmath11 denotes the composite density operator of the system and reservoir .
it is worth pointing out that the dynamics of the system depends directly on @xmath12 for all times , but the validity of the markovian approximation requires that the state of the system is sufficiently well described by @xmath13 , therefore it is sufficient to write @xmath11 . in what follows ,
we change to the schrödinger representation using @xmath14 , with @xmath15 being the unitary time - evolution operator , and after tracing over the degrees of freedom of the reservoirs we have @xmath16 , where the reduced density operator of the system is given by @xmath17 .
then , if the @xmath18 satisfies the markovian master equation @xmath19 with @xmath20 the liouvillian superoperator , the evolution of @xmath21 can be computed by solving the dynamics of the master equation . to calculate the two - time correlation function @xmath22 where @xmath23 and @xmath24 are arbitrary heisenberg operators , we proceed in a similar manner ,
that is , @xmath25 = tr_s [ \hat{a}(t) \hat{g}(t+\tau) ] , where we have used the well - known composition and inversion properties of the evolution operator .
then , the two - time operator is given by @xmath26 . by comparison of eq . ( [ sec : qrt:02 ] ) and eq . ( [ sec : qrt:03 ] ) , we find that @xmath27 is an operator that obeys the same dynamical equations as @xmath18 , but as a function of @xmath28 .
that is , @xmath29 , with the boundary condition @xmath30 at arbitrary time @xmath8 .
hence , in the long - time limit the qrt reads @xmath31 , where @xmath32 is the green's functions operator , and the operators @xmath9 , @xmath33 and @xmath34 are written in the schrödinger representation .
the superscript `` ( ss ) '' refers to the steady state of the reduced density operator of the system . after taking the laplace transform of eq . ( [ eq - final ] ) , we obtain an expression for the emission spectrum in terms of the green's functions operator , that is , @xmath35 . prior to leaving this section , we mention that this equation will be used for computing the emission spectrum of the cavity as well as of the quantum dot , e.g. by considering the photon and fermionic operators separately .
therefore , in the next subsection we will describe a general approach that can be applied to both cases . before describing a simple algorithm for calculating the emission spectrum , we take into account that the dynamics of both operators @xmath36 and @xmath37 are governed by the same master equation , i.e. , @xmath38 , with @xmath20 the liouvillian superoperator , which effectively has a larger tensor rank than the reduced density operator of the system .
so , as step ( i ) of the algorithm , we write the dynamical equations for the green's functions operator in component form : @xmath39 , together with the initial condition @xmath40 . the symbol @xmath41 is a composite index labeling the states of the reduced density operator of the system , e.g. indexing both matter and photon states in the qd - cavity system ( see section [ sec : appl ] for an example ) .
hence , @xmath42 and @xmath43 act as a column vector and a matrix , respectively , in this notation .
( ii ) we obtain the solution of eq . ( [ components ] ) in the frequency domain via the laplace transform , that is , @xmath44 . ( iii ) we invert the matrix @xmath45 and , finally , the emission spectrum is computed in terms of the initial conditions , @xmath46 . these initial conditions are easily obtained by evaluating the green's function operator at @xmath47 .
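steps ( i ) - ( iii ) amount to one linear solve per frequency . the following minimal sketch illustrates the procedure ( our own construction ; the function and variable names are ours , and we assume the liouvillian @xmath43 has already been assembled as a complex matrix acting on the vectorized green's functions operator ) :

    import numpy as np

    def emission_spectrum(liouvillian, g0, weights, omegas):
        # liouvillian : the superoperator of eq . ( [ components ] ) as a d x d
        #               complex matrix acting on the vectorized operator g
        # g0          : initial condition g(0) built from steady - state elements
        # weights     : selects the matrix elements of g that contribute
        # the laplace transform of dg / dtau = L g at s = i * omega reduces to
        # the linear system ( i * omega * I - L ) g(omega) = g(0)
        eye = np.eye(liouvillian.shape[0], dtype=complex)
        spec = np.empty(len(omegas))
        for j, w in enumerate(omegas):
            g_w = np.linalg.solve(1j * w * eye - liouvillian, g0)
            spec[j] = np.real(weights @ g_w)
        return spec

note that no time integration of the bloch equations is required : each frequency point is obtained from a single algebraic solve .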
in order to apply our proposed method for calculating the emission spectrum in a qd - cavity system , we will consider a simple but illustrative system composed of a quantum dot interacting with a confined mode of the electromagnetic field inside a semiconductor cavity .
this quantum system is well described by the jaynes - cummings hamiltonian @xcite @xmath48 , where the quantum dot is described as a fermionic system with only two possible states : @xmath49 and @xmath50 , the ground and excited states , respectively . @xmath51 and @xmath52 ( @xmath53 and @xmath54 ) are the annihilation ( creation ) operators for the fermionic system and the cavity mode .
@xmath55 is the light - matter coupling constant , and we have set @xmath56 .
we also define the detuning between the frequencies of the quantum dot and the cavity mode as @xmath57 ; moreover , @xmath58 is the energy needed to create an exciton and @xmath59 is the energy associated with the photons inside the cavity .
this hamiltonian alone is far from describing any real physical situation , since it is completely integrable @xcite and no measurements could be made , because the light would always remain inside the cavity .
in order to include the effects of the environment on the dynamics of the system , we consider the usual approach of modeling an open quantum system through a whole system - reservoir hamiltonian , which is frequently split into three parts .
the first part corresponds to the quantum dot - microcavity system .
the second part is the hamiltonian of the reservoirs and , finally , the third part is a bilinear coupling between the system and the reservoirs @xcite . after tracing out the degrees of freedom of all the reservoirs and assuming the validity of the born - markov approximation , one arrives at a master equation for the reduced density matrix of the system , @xmath60 + \frac{\kappa}{2} ( 2\hat{a}\hat{\rho}_{s}\hat{a}^{\dagger} - \hat{a}^{\dagger}\hat{a}\hat{\rho}_{s} - \hat{\rho}_{s}\hat{a}^{\dagger}\hat{a} ) + \frac{\gamma}{2} ( 2\hat{\sigma}\hat{\rho}_{s}\hat{\sigma}^{\dagger} - \hat{\sigma}^{\dagger}\hat{\sigma}\hat{\rho}_{s} - \hat{\rho}_{s}\hat{\sigma}^{\dagger}\hat{\sigma} ) + \frac{p}{2} ( 2\hat{\sigma}^{\dagger}\hat{\rho}_{s}\hat{\sigma} - \hat{\sigma}\hat{\sigma}^{\dagger}\hat{\rho}_{s} - \hat{\rho}_{s}\hat{\sigma}\hat{\sigma}^{\dagger} ) , where @xmath61 is the decay rate due to spontaneous emission , @xmath62 is the decay rate of the cavity photons across the cavity mirrors , and @xmath63 is the rate at which the excitons are being pumped . fig . [ systemscheme ] shows a scheme of the simplified model of the qd - cavity system with the processes of continuous pumping @xmath63 and cavity losses @xmath62 .
the physical process begins when light from the pumping laser enters the cavity and excites one of the quantum dots in the qd layer .
thus , light from this source couples to the cavity , and a fraction of the photons escapes from the cavity through the partly transparent mirror and reaches the spectrometer , where the emission spectrum is measured .
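for concreteness , this dissipative jaynes - cummings model with incoherent pumping can be written down in a few lines using the qutip library . the sketch below is our own illustration : the parameter values are arbitrary choices ( not those used in this paper ) , and qutip 's spectrum routine internally solves a frequency - domain problem of the same kind as the gft described below :

    import numpy as np
    from qutip import destroy, qeye, tensor, spectrum

    N = 10                               # fock - space truncation for the cavity
    a = tensor(destroy(N), qeye(2))      # photon annihilation operator
    sm = tensor(qeye(N), destroy(2))     # quantum - dot lowering operator sigma
    wc, wx, g0 = 1.0, 1.0, 0.05          # cavity energy , exciton energy , coupling ( hbar = 1 )
    kappa, gamma, P = 0.02, 0.005, 0.01  # cavity loss , spontaneous emission , pump rate
    H = wc * a.dag() * a + wx * sm.dag() * sm + g0 * (a.dag() * sm + a * sm.dag())
    c_ops = [np.sqrt(kappa) * a,         # photon leakage through the mirrors
             np.sqrt(gamma) * sm,        # spontaneous emission of the dot
             np.sqrt(P) * sm.dag()]      # continuous incoherent exciton pumping
    wlist = np.linspace(0.8, 1.2, 400)
    S_cavity = spectrum(H, wlist, c_ops, a.dag(), a)  # cavity emission spectrum
    S_dot = spectrum(H, wlist, c_ops, sm.dag(), sm)   # quantum - dot emission spectrum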
a general approach for solving the dynamics of the coupled system consists of writing the bloch equations for the reduced density matrix of the system in the bare basis , that is , in the extended hilbert space formed by taking the tensor product of the state vectors of each of the system components , @xmath64 . in this basis , the reduced density matrix @xmath65 can be written in terms of its matrix elements as @xmath66 .
hence , the master equation written in this basis explicitly reads , @xmath67 + ig \big[ \big( \sqrt{m+1} \delta_{\beta x} \rho_{s \alpha n , g m+1} + \sqrt{m} \delta_{\beta g} \rho_{s \alpha n , x m-1} \big) - \big( \sqrt{n} \delta_{\alpha g} \rho_{s x n-1 , \beta m} + \sqrt{n+1} \delta_{\alpha x} \rho_{s g n+1 , \beta m} \big) \big] + \frac{\kappa}{2} \big( 2 \sqrt{(m+1)(n+1)} \rho_{s \alpha n+1 , \beta m+1} - (n+m) \rho_{s \alpha n , \beta m} \big) - \frac{\gamma}{2} \big( \delta_{\alpha x} \rho_{s x n , \beta m} - 2 \delta_{\alpha g} \delta_{\beta g} \rho_{s x n , x m} + \delta_{\beta x} \rho_{s \alpha n , x m} \big) + \frac{p}{2} \big( 2 \delta_{\alpha x} \delta_{\beta x} \rho_{s g n , g m} - \delta_{\alpha g} \rho_{s g n , \beta m} - \delta_{\beta g} \rho_{s \alpha n , g m} \big) . note that we use the convention that all indices written in the greek alphabet are used for matter states and take values @xmath49 , @xmath50 , and the indices written in the latin alphabet are used for fock states and take values @xmath68 .
additionally , it is worth mentioning that our proposed method does not require solving a system of coupled differential equations ; instead , we solve a reduced set of algebraic equations , which speeds up the numerical solution .
prior to leaving this section , we point out that the number of excitations of the system is defined by the operator @xmath69 . for the closed system , the number of excitations is conserved , i.e. , @xmath70 = 0 .
this allows us to organize the states of the system through the number - of - excitations criterion , such that the density matrix elements @xmath71 , @xmath72 , @xmath73 and @xmath74 are related by having the same number of quanta .
that is , subspaces with a fixed number of excitations evolve independently of each other .
fig . [ qs ] shows a schematic representation of the action of the dissipative processes involved in the dynamics of the system according to the excitation number ( @xmath75 ) : dashed red lines indicate the emission of the cavity mode @xmath62 , solid black lines the exciton pumping rate @xmath63 , and solid blue lines the spontaneous emission rate @xmath61 . in order to compute the emission spectrum of the cavity , we will consider the two - time correlation function according to eq .
( [ eq - final ] ) for the photon operator as follows : @xmath76 . after performing the partial trace over the degrees of freedom of the system , we have @xmath77 , where the matrix elements of the time evolution operator are given by @xmath78 and @xmath79 . in what follows , we assume the validity of the markovian approximation , which means that the correlations between the system and the reservoir must be unimportant even at the steady state .
thus , the system - reservoir density operator can be written as @xmath80 , which implies that @xmath81 . replacing the previous expression in eq . ( [ ka ] ) , it is straightforward to show that the two - time correlation function reads @xmath82 , where the green's functions operator @xmath36 is given by @xmath83 . as we pointed out in section [ sec : qrt ] , this operator must obey the same master equation as the reduced density operator of the system .
in fact , the only terms that contribute to eq . ( [ corr ] ) are given by the matrix elements @xmath84 of the green's functions operator .
this is due to the fact that the projection operator @xmath85 enters into @xmath36 in the same way as into the reduced density operator of the system .
in order to identify these matrix elements , one should consider that for the qd - cavity system all coherences asymptotically vanish and only the reduced density matrix elements ruled by the number - of - excitations criterion remain , i.e. @xmath71 , @xmath72 , @xmath73 , @xmath74 .
then , eq . ( [ densitysr ] ) can be written as follows : @xmath86 . by replacing eq . ( [ diagonalrho ] ) into eq . ( [ op.green ] ) , we find that the green's functions operator explicitly reads @xmath87 . note that from this expression it is easy to identify the nonzero matrix elements of the green's functions operator that contribute to the emission spectrum .
finally , after performing the laplace transform , we have that the emission spectrum of the cavity is given by @xmath88 . it is worth mentioning that the initial conditions may be obtained by evaluating the green's function operator at @xmath47 ; then , using the fact that the time evolution operators become the identity and @xmath89 = 1 , we obtain a set of initial conditions given by @xmath90 . note that this set of initial conditions corresponds to the asymptotic solution of the bloch equations for the reduced density matrix of the system . in order to compute the emission spectrum of the quantum dot
, we will consider the two - time correlation function given by eq . ( [ eq - final ] ) , but for the case of the matter operator : @xmath91 . after performing the partial trace over the degrees of freedom of the system , it is straightforward to show that the two - time correlation function reads @xmath92 , where the green's functions operator @xmath36 is given by @xmath93 . assuming again the validity of the markovian approximation and taking into account the number - of - excitations criterion , the system - reservoir density operator can be written as @xmath94 . by inserting eq . ( [ diagonalrho2 ] ) into eq . ( [ op.green2 ] ) , we find that the green's functions operator explicitly reads @xmath95 . analogously to section [ cavityspectrum ] , we identify the nonzero matrix elements of the green's functions operator that contribute to the emission spectrum and , after performing the laplace transform , the emission spectrum of the quantum dot is given by @xmath96 , where @xmath97 is the normalizing factor at the steady state . taking into account that the initial conditions are obtained by evaluating the green's function operator at @xmath47 , and that the time evolution operators become the identity with @xmath89 = 1 , we obtain a set of initial conditions given by @xmath98 .
in this section , we compare the numerical calculations based on the gft and the qrt approach for the emission spectrum of the cavity as well as of the quantum dot . the qd - cavity system can display two different dynamical regimes depending on the values of its free parameters , and transitions between these two regimes can be achieved when the loss and pump rates are modified .
particularly , in the strong coupling regime the relation @xmath99 holds , whereas the relation @xmath100 holds in the weak coupling regime .
fig . [ panel2 ] shows the numerical calculations of the emission spectrum of the cavity in the weak coupling regime ; the parameter values are @xmath101 , @xmath102 , @xmath103 , @xmath104 , @xmath105 , @xmath106 .
panel ( a ) shows the emission spectrum for the gft compared to the qrt approach .
panel ( b ) shows the quantity @xmath107 as a measure of the error between the numerical calculations of the emission spectrum . for this set of parameter values , we can easily identify two peaks associated with the modes of the cavity and the quantum dot , that is , @xmath108 and @xmath109 , respectively .
fig . [ panel1 ] shows the same calculations as in fig . [ panel2 ] , but in the strong coupling regime ; the parameter values are @xmath101 , @xmath102 , @xmath110 , @xmath111 , @xmath112 , @xmath106 . in the case of resonance , the modes associated with the cavity and the quantum dot do not match but repel each other , resulting in a structure of two separate peaks a distance @xmath113 apart .
fig . [ panel3 ] shows the numerical calculations of the emission spectrum of the quantum dot with a high value of the rate @xmath114 and a smaller , although non - negligible , pumping @xmath115 .
the rest of the parameter values are @xmath101 , @xmath116 , @xmath117 , @xmath106 . we observe that our numerical method based on the gft is in full agreement with the qrt approach and reproduces very well the emission spectrum associated with this system .
the quantity @xmath107 shows the discrepancy between both methods , which is of the order of @xmath118 , as seen in fig . [ panel1 ] and fig . [ panel2 ] .
the discrepancy between both methods is mainly due to the numerical errors accumulated in the numerical integration of the bloch equations in the qrt approach , which causes some differences in the spectrum with respect to the results computed by the gft . note that there is no integration of any equations in the gft ; therefore , we expect a more accurate emission spectrum .
we mention that for the numerical calculations based on the qrt approach we have followed ref . @xcite . in order to test the performance of the numerical method , we compare four calculation times for computing the emission spectrum of the cavity based on the gft and the qrt approach at different excitation numbers .
table [ table01 ] shows in its first column the excitation number , i.e. the truncation level in the bare - state basis for the numerical calculations involved .
the second and third columns show the elapsed time in seconds for the gft and the qrt approach , respectively . note that for comparison purposes all numerical calculations were performed at the same truncation level , i.e. @xmath119 .
additionally , we have solved the bloch equations numerically ( see eq . ( [ eq : maestra ] ) ) until time @xmath120 in order to obtain a good resolution in the frequency domain for the qrt approach , i.e. @xmath121 .
hence , we have evaluated the emission spectrum for the gft on a grid with the same resolution in the frequency domain ( we emphasize that the qrt approach is time - consuming due to the number of coupled differential equations to be solved , rather than the number of evaluations on the grid used ) . in addition , the numerical calculations were carried out with the same parameter values as in fig . [ panel1 ] for both the gft and the qrt approach .
we found that our numerical approach based on the gft is very efficient and accurate for calculating the emission spectrum in qd - cavity systems .
moreover , this method can easily be implemented with standard numerical linear algebra packages in any programming language .
table [ table01 ] : comparison of calculation times between the green's functions technique ( gft ) and the quantum regression theorem ( qrt ) in the numerical calculation of the emission spectrum of the cavity .
the calculations were made using a commercial intel(r ) core(tm ) @xmath122 processor of @xmath123 ghz @xmath124 and @xmath125 gb ram .
we have developed the green's function technique as an alternative methodology to the qrt for calculating the two - time correlation functions in open quantum systems . in particular , we have shown the performance of the green's function technique by calculating the emission spectrum in an open quantum system composed of a quantum dot embedded in a microcavity .
this theoretical approach is rather general and allows one to overcome the inherent theoretical difficulties presented by the direct application of the qrt , i.e. , finding a closure condition on the set of operators involved in the dynamical equations , by considering that all coherences asymptotically vanish and that only the reduced density matrix elements ruled by the number - of - excitations criterion remain .
we have shown that the green's function technique offers several computational advantages : it speeds up numerical computations by transforming the dynamics of the master equation into a set of linear algebraic equations , which are efficiently solvable by numerical linear algebra routines , and it provides faster convergence and a significant reduction of computational time , since the emission spectrum is calculated as a sum of terms of non - diagonal matrix elements of the reduced density operator of the system .
we mention that our methodology can be extended to calculating the emission spectrum in significant situations involving quantum dots in the biexcitonic regime or coupled photonic cavities .
this work was financed by the vicerrectoría de investigaciones of the universidad del quindío within the project with code @xmath126 , and by colciencias within the project with code @xmath127 , contract number @xmath128 , and hermes code @xmath129 .
h. walther , et al . , rep . prog . phys . , 69 ( 2006 ) 1325 .
a. kavokin , et al . , microcavities , oxford university press ( 2007 ) .
h. altug , d. englund and j. vuckovic , nature phys . , 2 ( 2006 ) 484 .
y. mu and c. m. savage , phys . rev . a , 46 ( 1992 ) 5944 .
r. m. stevenson , et al . , nature , 439 ( 2006 ) 179 .
t. m. stace , g. j. milburn , and c. h. w. barnes , phys . rev . b , 67 ( 2003 ) 085317 .
n. gisin , et al . , rev . mod . phys . , 74 ( 2002 ) 145 .
c. monroe , nature , 416 ( 2002 ) 238 .
y. todorov , et al . , phys . rev . lett . , 99 ( 2007 ) 223603 .
j. wiersig , et al . , nature , 460 ( 2009 ) 245 .
g. khitrova , et al . , nature phys . , 2 ( 2006 ) 81 .
j. p. reithmaier , et al . , nature , 432 ( 2004 ) 197 .
l. mandel and e. wolf , optical coherence and quantum optics , cambridge university press ( 1997 ) .
o. jedrkiewicz and r. loudon , j. opt . b : quantum semiclass . opt . , 2 ( 2000 ) r47 .
n. v. hieu and n. b. ha , adv . nat . sci . : nanosci . nanotechnol . , 1 ( 2010 ) 045001 .
d. f. walls and g. j. milburn , quantum optics ( berlin : springer - verlag , 1994 ) .
m. lax , phys . rev . , 145 ( 1966 ) 110 .
s. swain , j. phys . a : math . gen . , 14 ( 1981 ) 2577 .
e. del valle , f. p. laussy , and c. tejedor , phys . rev . b , 79 ( 2009 ) 235326 .
n. quesada , h. vinck - posada and b. a. rodríguez , j. phys . : condens . matter , 23 ( 2011 ) 025301 .
c. a. vera , n. quesada , h. vinck - posada and b. a. rodríguez , j. phys . : condens . matter , 21 ( 2009 ) 395603 .
n. ishida , et al . , sci . rep . , 3 ( 2013 ) 1180 .
t. quang , et al . , phys . rev . a , 48 ( 1993 ) 803 .
e. t. jaynes and f. w. cummings , proc . ieee , 51 ( 1963 ) 89 .
m. o. scully and m. s. zubairy , quantum optics , cambridge : cambridge university press ( 1996 ) .
j. i. perea , d. porras and c. tejedor , phys . rev . b , 70 ( 2004 ) 115304 . | we introduce the green's functions technique as an alternative theory to the quantum regression theorem formalism for calculating the two - time correlation functions in open quantum systems .
in particular , we investigate the potential of this theoretical approach by applying it to compute the emission spectrum of a dissipative system composed of a single quantum dot inside a semiconductor cavity .
we also describe a simple algorithm based on the green's functions technique for calculating the emission spectrum of the quantum dot as well as of the cavity , which can easily be implemented in any numerical linear algebra package .
we find that the green's functions technique demonstrates better accuracy and efficiency in the calculation of the emission spectrum and allows one to overcome the inherent theoretical difficulties associated with the direct application of the quantum regression theorem approach .
quantum dot , semiconductor cavity , emission spectrum , markovian master equation , green's functions technique , quantum regression theorem .
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Senior Scams Prevention Act''.
SEC. 2. SENIOR SCAMS PREVENTION ADVISORY COUNCIL.
(a) Establishment.--There is established a Senior Scams Prevention
Advisory Council (referred to in this Act as the ``Advisory Council'').
(b) Members.--The Advisory Council shall be composed of the
following members or the designees of those members:
(1) The Chairman of the Federal Trade Commission.
(2) The Secretary of the Treasury.
(3) The Attorney General.
(4) The Director of the Bureau of Consumer Financial
Protection.
(5) Not more than 2 representatives from each of the
following sectors, including trade associations, to be selected
by the Chairman of the Federal Trade Commission:
(A) Retail.
(B) Gift card.
(C) Telecommunications.
(D) Wire transfer services.
(E) Senior peer advocates.
(F) Consumer advocacy organization with efforts
focused on preventing seniors from becoming the victims
of scams.
(G) Financial services, including institutions who
engage in digital currency.
(H) Prepaid cards.
(6) Any other Federal, State, or local agency, industry
representative, consumer advocate, or entity, as determined by
the Chairman of the Federal Trade Commission.
(c) Duties.--
(1) In general.--The Advisory Council shall, while
considering public comment--
(A) collect information on the existence, use, and
success of model educational materials and programs for
retailers, financial services and wire transfer
companies, which--
(i) may be used as a guide to educate
employees on how to identify and prevent scams
that affect seniors; and
(ii) include--
(I) useful information for
retailers, financial services, and wire
transfer companies for the purpose
described in clause (i);
(II) training for employees on ways
to identify and prevent senior scams;
(III) the best methods for keeping
employees up to date on current scams;
(IV) the most effective signage and
best placement for signage in retail
locations to warn seniors about
scammers' use of gift cards and wire
transfer services;
(V) suggestions on effective
collaborative community education
campaigns;
(VI) available technology to assist
in identifying possible scams at the
point of sale; and
(VII) other information that would
be helpful to retailers and wire
transfer companies and their employees
as they work to prevent fraud affecting
seniors; and
(B) based on the findings in subparagraph (A)--
(i) identify inadequacies, omissions, or
deficiencies in those educational materials and
programs for the categories listed in
subparagraph (A) and their execution in
reaching employees to protect older adults; and
(ii) create model materials to fill those
inadequacies, omissions, or deficiencies.
(2) Encouraged use.--The Chairman of the Federal Trade
Commission shall, after the public comment period is complete--
(A) make the model educational materials and
programs and information about execution of the
programs described in paragraph (1) publicly available;
and
(B) encourage the use and distribution of the
materials created under this subsection to prevent
scams affecting seniors by governmental agencies and
the private sector.
(d) Reports.--Section 101(c) of the Elder Abuse Prevention and
Prosecution Act (34 U.S.C. 21711(c)) is amended--
(1) in subparagraph (C), by striking ``and'' at the end;
(2) in subparagraph (D), by striking the period at the end
and inserting ``; and''; and
(3) by adding at the end the following:
``(E) for the Federal Trade Commission, include
information on--
``(i) the Senior Scams Prevention Advisory
Council's newly created model materials, any
recommendations of the Advisory Council, and
any views or considerations made by members of
the Advisory Council or by public comment that
were not included in the Advisory Council's
model materials or considered an official
recommendation by the Advisory Council;
``(ii) the Senior Scams Prevention Advisory
Council's findings about senior scams
(including information about the ways scams
affect seniors, including the negative effects
on their well-being); and
``(iii) any recommendations on ways
stakeholders can continue to work together to
reduce scams affecting seniors.''.
(e) Termination.--This Act, and the amendments made by this Act,
ceases to be effective on the date that is 5 years after the date of
enactment of this Act. | Senior Scams Prevention Act This bill establishes a Senior Scams Prevention Advisory Council, which shall create model educational materials to educate employees of retailers, financial-services companies, and wire-transfer companies on how to identify and prevent scams that affect seniors. |
somatostatinoma was first reported in 1977 by larsson et al.1 since then , only a handful of cases have been reported worldwide because of the extremely low incidence of this tumor , estimated to be 1 in 4 million.2 women were found to have a higher incidence than men.3 most of the cases were symptomatic , with some clinical features of inhibitory symptoms such as diabetes mellitus - like symptoms , steatorrhea , cholelithiasis , and chronic diarrhea .
asymptomatic somatostatinoma accounted for approximately 10% of reported cases.4 the most common locations for tumors were the duodenum and pancreas.5 this tumor was malignant in 60% to 70% of cases with tumor size usually less than 5 cm .
an association between somatostatinoma and some familial neuroendocrine disorders such as neurofibromatosis and von hippel - lindau syndrome was reported in approximately 7% of cases .
adjuvant chemotherapy was not advocated after surgery . despite slow disease progression , 5-year survival after incomplete resection of the tumor
was reported to be 60% to 100%.6 we report a case of somatostatinoma with an unusual location and rare manifestation .
a 49-year - old woman with a history of type 2 diabetes mellitus for a few years presented with chronic abdominal discomfort and nausea for 1 year .
three months earlier , she complained of severe abdominal pain , significant weight loss , and chronic intermittent watery diarrhea .
abdominal sonography showed a large epigastric mass , 10 × 16 cm in size , with multiple hyperechoic nodules in both lobes of the liver and peripancreatic lymphadenopathy , suspected to be metastatic spread to the lymph nodes .
the patient was scheduled for a computed tomography ( ct ) scan of the upper abdomen . while she was waiting at the radiology unit , she developed hematemesis and was referred to the internal medicine department for further treatment .
endoscopy showed a large mass , approximately 15 cm in diameter , in the upper gastric body .
the mass extended down to the antrum and duodenal bulb with blood oozing ( fig . ) . endoscopic ultrasonography was then performed , and a large well - defined isoechoic gastric subepithelial mass more than 13.8 cm in diameter was detected ( fig .
multiple intra - abdominal and peripancreatic lymphadenopathy were detected , and the lesions varied in size from 8 to 25 mm .
fine needle aspiration was performed on the mass , and the cytology results led to a suspicion of malignancy .
the ct scan showed a large soft tissue mass , with multiple liver metastases and intra - abdominal lymphadenopathy ( fig .
gross pathology showed tumor involvement of the posterior wall of the stomach , with the tumor forming a large intramural multinodular mass ( fig .
multiple well - circumscribed masses were detected at the pancreatic head and body and at the first part of the duodenum .
histopathology showed that the tumor was arranged in solid nests or in an acinar pattern infiltrating from muscularis propria of the stomach through the mucosa and extending to the duodenum .
their nuclei were monotonous and bland , with salt - and - pepper nuclear chromatin , a typical characteristic of neuroendocrine tumors , including the presence of psammoma bodies ( fig .
however , she developed acute jejunal obstruction 2 weeks later due to intussusception from a jejunal leiomyoma and underwent gastrojejunostomy .
this case of somatostatinoma involved a tumor , whose size was the largest ever reported .
tumors in most of the reported cases were smaller than 5 cm ; more than 85% were symptomatic and more than two - thirds were malignant.7 - 9 the most common locations of this tumor were the duodenum and pancreas . however , this patient 's gastric somatostatinoma was detected in a rare location . in the presented case ,
the patient only had chronic intermittent watery diarrhea , one of the clinical presentations of classic somatostatinoma syndrome , without any other symptoms .
in many other reports , these classic somatostatinoma syndrome presentations were more common for pancreatic masses , whereas obstructive symptoms such as abdominal pain , nausea , and vomiting were more common for duodenal masses .
interestingly , this patient presented with acute massive gastric haemorrhage , which was a rare manifestation of this tumor .
duodenal and periampullary bleeding have been reported in very few cases worldwide as a clinical presentation.10 - 12 the bleeding was easily stopped with an adrenaline injection .
surprisingly , the endosonographic study could not demonstrate any difference between this tumor and other subepithelial tumors such as gastrointestinal stromal tumor . in our previous study,13
it was very difficult to differentiate the etiology of large subepithelial tumors , especially those larger than 9 cm in diameter , using only endosonographic characteristics .
therefore , a definite diagnosis must be based on cytology and histopathology with special immunohistochemical stains .
no data supports the benefit of adjuvant chemotherapy for the treatment of metastatic somatostatinoma , although few small case series reported that the synthetic somatostatin analogue octreotide , at 0.5 mg subcutaneously per day , had some benefits , including reduced diarrhea and other somatostatinoma symptoms , and stabilised tumor growth .
some studies on somatostatinoma reported no benefit from this treatment , which might be due to a different subtype of somatostatin produced by the tumor.14,15 the overall 5-year survival for metastatic somatostatinoma was reported at approximately 40% , whereas that for small nonmetastatic lesions is 100%.15 | a 49-year - old woman presented with chronic abdominal discomfort , significant weight loss , and chronic intermittent diarrhea .
she suddenly developed massive upper gastrointestinal bleeding and was referred for further treatment .
endoscopy indicated a large mass in the upper gastric body with antral and duodenal bulb involvement .
endosonography showed a large well - defined isoechoic gastric subepithelial mass with multiple intra - abdominal and peripancreatic lymphadenopathy , suspected to be malignant on the basis of fine needle aspiration cytology .
the tumor was surgically removed , and histopathology showed typical characteristics of a neuroendocrine tumor .
on the basis of immunohistochemical staining , somatostatinoma , a rare neuroendocrine tumor , was diagnosed .
gastrointestinal bleeding is a rare presentation and the stomach is an uncommon tumor location . |
the third international seminar on reducing the burden of traffic accidents , compared to the previous two seminars , witnessed more problem - oriented research on this important issue .
the presence of policy makers and executive bodies along with academicians led to great success , and their cooperation had an immediate impact on fars province traffic mortality as a model . | iran has had an incremental incidence of traffic accident mortality since the introduction of mechanization about a century ago .
but the newest data from iran show a decrease in the absolute number of deaths , deaths per 10,000 vehicles and deaths per 100,000 population . despite its huge impact on health and the economy , research in the field of traffic crashes is still scant and there are still deficiencies in problem - oriented research on traffic accidents . actual cooperation of policy makers , executive bodies and academicians could build a platform for intersectoral discussion of different aspects of traffic accidents and could reduce the burden of traffic accidents .
sunspots are an obvious and significant manifestation of a solar magnetic field .
an accurate knowledge of the structure of sunspots is essential to understand the magnetic activity of our star .
however , the subsurface structure of these magnetic features is still an unsolved question in solar physics : is it a single monolithic flux tube ( monolithic model ) , as suggested by @xcite , or rather a bundle of individual flux tubes like spaghetti ( cluster model ) , as proposed by @xcite ?
observations by @xcite showed a significant absorption of @xmath0- and @xmath2-modes by sunspots . in the monolithic sunspot model , the absorption is due to the conversion of the incoming acoustic @xmath2-modes into magnetoacoustic slow @xmath3-modes that propagate along the magnetic field lines @xcite . for the cluster sunspot model ,
the absorption is caused by a multiple - scattering regime from the excitation of tube waves @xcite .
a comparison between the scattered wave field of the two competing models shows a remarkable difference , which can be used to distinguish the structure of the model @xcite .
we have to note that no gravitational stratification was considered in the models studied in the two latter references .
@xcite used a semi - analytic method to examine the absorption of @xmath2-modes by a large collection of thin magnetic tubes ( plage ) in a stratified medium .
however , they did not take into account the scattering between tubes , considering each tube as isolated from the others .
@xcite were the first to study analytically the multiple - scattering regime of pairs of flux tubes in a stratified atmosphere .
they found that the scattering for the kink mode ( @xmath4 ) changes dramatically for small flux - tubes separations .
they also showed a significant contribution of the near - field phenomenon to the scattering .
@xcite used scattering formalism to investigate the interaction of the monopole component ( @xmath5 ) of acoustic @xmath2-modes with a thin magnetic fibril .
they found that mode - mixing and absorption are weak for thin flux tubes .
@xcite used the semi - analytical model of @xcite to incorporate the sausage mode in addition to the kink one , showing the important contribution of the sausage mode to the scattering by the pair of tubes .
they concluded that the absorption of sausage mode is a magnitude larger than that of the kink mode when the tubes are in a close proximity for the higher frequency of 5 mhz .
@xcite extended the model of @xcite to study the scattering by a larger ensemble of thin magnetic flux tubes .
they deducted that the absorption enhanced for a larger ensemble of tubes , or higher frequency .
in addition , they noted that the spatial distribution of tubes affects the absorption at higher frequencies ( 5 mhz ) .
numerical simulations of wave propagation through solar magnetic features provide an efficient and direct way to infer their structure by observing their helioseismic signatures .
recently , @xcite investigated numerically the interaction of @xmath0-mode with an ensemble of flux tubes of different number and configuration .
they found that multiple scattering strongly affects the absorption coefficients , showing that the sausage and kink modes are the dominant modes for the scattering .
they noted also that the absorption generally increases with the number of flux tubes and the reduction of the distance between them .
@xcite studied the helioseismic signatures of monolithic and spaghetti sunspot models , where , for the first time , the latter model contained a realistic number of tubes .
they found that the mode - mixing from the monolithic model is more efficient than that of the spaghetti model .
their simulations also reveal that the differences observed in the absorption coefficient for the two models can be detected above the noise level . @xcite simulated the propagation of an @xmath0-mode wave packet centered at a frequency of 3 mhz through a hexagonal cluster of thin magnetic flux tubes , considering the effect of the tube configuration and the separation distance on the scattering .
it was found that when the separation @xmath7 between two neighboring tubes within the cluster is about @xmath8 ( @xmath9 is the wavelength of the incoming wave ) , individual tubes within the loose cluster start to scatter waves to nearby tubes , which scatter again into the near field , and so on , leading to a greatly enhanced absorption measured in the far field ( multiple - scattering regime ) .
we define a compact cluster as a bundle of magnetic flux tubes in a close - packed configuration .
we have shown that a loose cluster in the multiple - scattering regime is a more efficient absorber of waves than a compact cluster or an equivalent monolithic tube of either cluster . in the present study , unlike @xcite , we have fixed the distance @xmath7 to the compact - cluster separation and changed the wavelength @xmath9 of the incoming wave to see if the multiple - scattering regime can occur for a cluster in a close - packed configuration , as it does for a loose cluster .
given this , an important question emerges : is the condition on @xmath7 ( cited above ) sufficient to obtain a multiple - scattering regime for the compact cluster , or must we add the condition that the cluster be in a loose configuration to allow tubes to communicate through their near fields ?
an important part of this work will try to answer this question .
the paper is organized as follows . in section [ s - simulations ]
we briefly describe the code that we used and the set up of the simulations . in section [ s - identmsr ]
we present the method of inspecting the multiple - scattering effects .
sections [ s - smallspots ] and [ s - largespots ] outline the results of the interaction of a wave with small and large sunspot models , respectively , in a stratified atmosphere , including the effect of the frequency variation on the scattering in both cases . for more general results ,
we compute in section [ scatc ] the scattering cross section for the different sunspot models .
finally , the discussion and conclusions are presented in section [ concl ] .
we have performed the simulations using the code @xcite which solves the linear and ideal mhd equations in a three - dimensional stratified atmosphere .
a pseudo - spectral scheme is implemented in the horizontal directions and a two - step lax - wendroff scheme in the vertical direction to evolve the horizontal fourier modes .
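as an illustration of these two numerical ingredients , here is a minimal 1-d sketch applied to a linear advection test problem u_t + c u_x = 0 on a periodic grid ( an assumption - laden toy , not the actual solver of the code ) :

```python
import numpy as np

def spectral_dx(u, dx):
    # pseudo-spectral derivative: differentiate in fourier space (periodic domain)
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def lax_wendroff_step(u, c, dx, dt):
    # richtmyer two-step lax-wendroff update for u_t + c u_x = 0:
    # half step to provisional interface values at t + dt/2 ...
    u_half = 0.5 * (u[1:] + u[:-1]) - 0.5 * c * dt / dx * (u[1:] - u[:-1])
    # ... then a centered full step built from the provisional values
    u_new = u.copy()
    u_new[1:-1] -= c * dt / dx * (u_half[1:] - u_half[:-1])
    return u_new
```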
the background atmosphere is an enhanced polytropic atmosphere described by @xcite .
the horizontal extent of the computational domain is @xmath10 @xmath11 mm and @xmath12 @xmath11 mm .
the depth @xmath13 ranges from 0.2 mm to 6 mm below the solar surface .
the spatial resolution for all simulations is 192 @xmath14 192 fourier modes in the @xmath10 - and @xmath12 - directions , and 150 grid points in the @xmath13 - direction .
since the @xmath0-mode interacts strongly with the sunspot compared to the @xmath2-mode @xcite , we propagate in all our simulations an @xmath0-mode wave packet with a gaussian envelope centered at the angular frequency @xmath15 mhz with a standard deviation of 1.18 mhz . each individual @xmath0-mode has an exponential dependence on depth @xmath13 with the frequency @xmath16 , where @xmath17 is the horizontal wavenumber . in subsections [ s - fsmallspots ] and [ s - flargespots ] we have studied the scattering using centered frequencies from @xmath18 mhz to @xmath19 mhz . at @xmath20 ,
the wave packet is situated at the left edge of the computational domain @xmath21 mm and it propagates from the left to the right in the @xmath10-direction .
all waves of the @xmath0-mode are in phase at the initial position .
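a sketch of this initial condition , assuming the deep - water f - mode dispersion relation omega^2 = g k for @xmath16 ( the surface gravity , packet position and mode count below are placeholder values ) :

```python
import numpy as np

G_SUN = 274.0  # m s^-2 ; assumed surface gravity

def f_mode_packet(x, z, t=0.0, nu0=3.0e-3, sig_nu=1.18e-3, x0=0.0, nmodes=200):
    # gaussian envelope in angular frequency around 2*pi*nu0
    om0, sig = 2.0 * np.pi * nu0, 2.0 * np.pi * sig_nu
    om = np.linspace(om0 - 3.0 * sig, om0 + 3.0 * sig, nmodes)
    k = om ** 2 / G_SUN            # f-mode dispersion: omega^2 = g k
    amp = np.exp(-0.5 * ((om - om0) / sig) ** 2)
    # each mode decays as exp(k z) with depth (z <= 0); all modes are in
    # phase at x = x0, the initial position of the packet
    modes = amp[:, None] * np.exp(k[:, None] * z) \
        * np.cos(k[:, None] * (x - x0) - om[:, None] * t)
    return modes.sum(axis=0) / amp.sum()
```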
a periodic boundary condition is imposed on the horizontal side walls of the simulation box .
the upper boundary condition plays an important role in the absorption and the scattering of @xmath0-modes . in our case
, it corresponds to a free surface in the non - magnetic regions , where waves are strongly reflected from the upper boundary @xcite .
our initial condition has very little energy in the range where it would escape ( @xmath22 0.06 % ) .
recently , @xcite have modeled the interaction of @xmath0-and-@xmath2-modes with a random distribution of tubes .
they have found that the absorption coefficients and the damping rates are very sensitive when a magnetic fibril is extended into an isothermal region above a polytrope .
more recently , @xcite have extended the model of @xcite by allowing sausage and kink modes to freely escape at the top of model using a radiative boundary condition there .
their results show an increase of the absorption coefficients of the incoming @xmath0-wave for both modes , in comparison to the absorption obtained using a reflective stress - free condition at the top of the tubes . our condition is not realistic , since in reality waves can escape to the chromosphere and corona .
however , our goal is not to construct a model that mimics the solar atmosphere , but to study very simple sunspot models where we have control over the physics . at the bottom layer
, we impose that waves are evanescent .
the initial individual magnetic flux tube is vertical in the @xmath13-direction .
it is embedded in the polytropic background atmosphere with a radial top profile given by @xmath23 where @xmath24 is the tube radius , and @xmath25 = 4820 g @xcite .
the magnetic field has the same radial profile along the depth @xmath13 .
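the exact expression @xmath23 is not reproduced here ; as a hypothetical stand - in , a gaussian profile with the quoted peak field can be sketched as :

```python
import numpy as np

B0 = 4820.0  # gauss ; peak field strength quoted in the text

def bz_top_profile(r, r_tube):
    # hypothetical stand-in for the radial profile @xmath23: a gaussian
    # with e-folding radius r_tube (the paper's actual profile may differ)
    return B0 * np.exp(-(r / r_tube) ** 2)
```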
the center of the sunspot model is located at the point @xmath26 mm , @xmath27 .
the simulation without the flux tube corresponds to @xmath28 .
the scattered wave field is obtained by subtracting the simulation without the flux tube from the simulation with the flux tube .
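in practice this is a single array subtraction ( a sketch ; array names and shapes are assumptions ) :

```python
def scattered_field(vz_with_tube, vz_quiet):
    # scattered wave field = run with the flux tube minus the quiet run;
    # e.g. surface vertical velocity arrays of shape (nt, nx, ny)
    return vz_with_tube - vz_quiet
```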
we consider that the vertical velocity @xmath29 at the upper surface is the most appropriate component to analyze the scattering since it is the only component that can be measured with dopplergrams from the solar disk center .
all the slices of the scattered vertical velocity in this paper were taken near the surface at the depth @xmath30 km .
figure [ fig1 ] shows the unperturbed full wave field of the component @xmath29 taken at @xmath6 seconds .
the amplitude of the incoming wave is normalized to 1 .
this will set the scale for the scattered wave . according to the variation of the sound and the alfvén speeds in the enhanced polytropic atmosphere , the plasma-@xmath31 is not constant inside the tube .
these speeds are set to be equal at a depth of 400 km where the plasma-@xmath32 . in this paper
, we consider only the @xmath0-@xmath0 scattering without focusing on the phase shift variation , since @xmath0-@xmath33 mode - mixing decreases rapidly with increasing radial number @xmath34 @xcite . in our simulations ,
the monolithic models with a large radius can be considered as thick tubes with respect to the wavelength of the incoming wave .
the coupling between the fast and the slow magnetoacoustic waves makes the distinction between different modes difficult , particularly near the surface where @xmath35 .
consequently , these tubes will scatter in all @xmath36 and the vertical velocity will appear as a summation of all these modes .
[ figure : scattered wave field at @xmath37 = 3300 s for three different models of sunspot ( @xmath38 mhz ) : ( a ) a cluster of seven identical compact tubes of 200 km radius , ( b ) a single monolithic tube whose radius @xmath39 km is the average radius of the cluster , ( c ) a hexagonal monolithic tube whose size at the surface is the same as that of the cluster . the colour scale is the same for the three snapshots . the scattered wave fields of the monolithic tube ( b ) and its equivalent hexagonal monolithic model ( c ) seem similar , whereas the compact cluster model ( a ) shows a more extensive left wave field in the @xmath12-direction . ]
we are inspecting the multiple - scattering from two complementary points of view : 1- from a visual inspection of the scattered wave field ; we compare the scattered wave field of a monolithic model with that of cluster models .
as the @xmath0-mode has a maximum of power at the surface , multiple - scattering from a cluster model is easily identifiable when individual tubes scatter waves to the near field , leaving their specific signatures in that region . in figure 9 of @xcite ,
we have observed a multiple - scattering signature from two loose clusters made of 7 and 9 tubes respectively ( @xmath40 ) .
the near - field area extends till a distance of 2 mm from these structures in the @xmath10-direction and it can be clearly distinguished from the wave field of the equivalent monolithic tube .
2- from the temporal profiles of the scattered surface vertical velocity measured at a single point situated in the far field .
actually , the multiple - scattering regime enhances the absorption of the incoming wave ( near - field scattering ) leading to a decrease in the amplitude of the scattering measured in the far field .
the visual inspection of multiple - scattering effects from a pair of magnetic flux tubes and from two loose clusters made of 7 and 9 tubes was confirmed by using this method @xcite .
we have shown in @xcite that the multiple - scattering regime occurs for a separation distance @xmath41 which is approximately @xmath42 in the case of the 3 mhz @xmath0-mode . however , for a separation @xmath43 , we obtained a coherent scattering regime , which is characterized by the enhancement of the scattering in both the near field and the far field , but no absorption was measured in the far field . since both ranges correspond to the scattering regime , we can merge them to get @xmath44 . in conclusion , we can state that the scattering regime occurs when the separation distance between tubes within the cluster is @xmath45 , where @xmath42 is the lower limit and @xmath46 is the upper limit . the latter value is consistent with previous studies by @xcite and @xcite , who agree that the extent of the region of influence of the near field in multiple - scattering is @xmath9/2 .
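this bookkeeping can be sketched as a small helper ( the numerical bounds below are assumptions standing in for @xmath42 and @xmath46 ; only the upper limit of about @xmath9/2 is fixed by the text ) :

```python
def scattering_regime(d, lam, lower_frac=0.25, upper_frac=0.5):
    # illustrative classifier: lower_frac stands in for the lower limit
    # @xmath42 (assumed lam/4 here) and upper_frac for @xmath46 ~ lam/2
    if d < lower_frac * lam:
        return "below the scattering range (near fields strongly overlap)"
    if d <= upper_frac * lam:
        return "scattering regime (multiple or coherent scattering)"
    return "independent scatterers (negligible near-field exchange)"
```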
to probe the subsurface structure of sunspots , it is essential to construct adequate sunspot models that mimic the full complexity of solar magnetic structures .
however , our aim is not to study a realistic sunspot to reproduce quantitatively the observations , but to get specific seismic signatures from simple sunspot models with basic properties in order to distinguish between them , at least qualitatively .
in addition to the monolithic and cluster models of the sunspot that were studied by @xcite and previous papers , we have incorporated here a new intermediate model to better interpret and understand the results .
we have carried out three simulations shown in figure [ fig3 ] ( a , b , c ) where the scattered wave field at @xmath37 = 3300 s is displayed .
the snapshots show the propagation of an @xmath0-mode wave packet of angular frequency @xmath47 3 mhz through : ( a ) a cluster of seven identical flux tubes in a hexagonal compact configuration , where each individual tube within the cluster has a radius @xmath48 km ; ( b ) a single monolithic tube whose radius @xmath24 = 880 km is the average radius of the cluster in ( a ) ; ( c ) a hexagonal monolithic flux tube whose shape at the surface is the same as that of the cluster in ( a ) .
the top panel of figure [ fig2 ] shows the magnetic field profile of the cluster model ( a ) from the simulated data .
the separation distance between two neighboring tubes within the compact clusters is fixed as @xmath49 , where @xmath50 is the radius of an individual tube within the cluster .
[ figure : scattered wave field at @xmath37 = 3300 s for three different models of sunspot ( @xmath1 3 mhz ) : ( a ) a cluster of seven identical compact tubes of 400 km radius , ( b ) a single monolithic tube whose radius @xmath51 mm is the average radius of the cluster , ( c ) a hexagonal monolithic tube whose size at the surface is the same as that of the cluster . the colour scale is the same for the three snapshots . the compact cluster model ( a ) shows a triangular waveform in the left near field , whereas the hexagonal monolithic model ( c ) shows a non - uniform waveform . ]
the model ( c ) is an intermediate feature between ( a ) and ( b ) .
it is interesting to see what can be the contribution of the sunspot on the scattering in terms of geometrical shape .
we note that the contour of the magnetic field in hexagonal model ( c ) is obtained by using the parametric equation of an epicycloid ( @xmath52 ) .
we recall that we restrict our analysis to the scattered wave field on the left of the magnetic elements , where their oscillations are observed without a contribution from the incoming wave .
figure [ fig3](a ) shows the scattering from the cluster .
no signature of multiple - scattering regime was observed from this model .
however , we observe that waves seem to be slightly compressed in the @xmath10-direction compared to the circular waveform of the monolithic tube of 880 km radius .
in fact , @xcite showed that a pair of flux tubes aligned perpendicular to the direction of the incoming wave oscillate simultaneously with the @xmath0-mode in @xmath12-direction whatever the separation @xmath7 is .
indeed , we can distinguish within the cluster two pairs of tubes in a perpendicular configuration to the incoming wave .
@xcite observed in their simulations that waves ( @xmath0-mode ) scattered by a spaghetti model of the sunspot are flatter than those scattered by a monolithic tube .
it is reasonable to infer that the flattening observed in our simulation is the same phenomenon described by these authors .
the oscillation of the tubes in the @xmath12-direction as the wave propagates contributes to the scattered wave , giving this appearance .
this effect could be amplified if there are more tubes inside the bundle . as is apparent from figure [ fig3 ]
, the scattered wave fields of the hexagonal monolithic model ( c ) and its equivalent monolithic tube ( b ) seem to be similar , at least qualitatively . to complete this study , we need to investigate how oscillations from sunspot models vary with the frequency .
to do so , we have fixed the separation distance @xmath7 as in the cluster ( a ) , and we have changed the wavelength @xmath9 of the incoming wave through the change in frequency .
this will allow us to change the ratio @xmath53 without changing the size of the cluster .
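under the same assumed dispersion relation omega^2 = g k , the frequency sweep maps directly onto a sweep in @xmath53 ; a sketch ( the separation d below is a placeholder , not the paper's @xmath7 ) :

```python
import numpy as np

G_SUN = 274.0  # m s^-2 ; assumed surface gravity

def f_mode_wavelength(nu_hz):
    # lambda = 2*pi/k with k = omega^2/g, i.e. lambda = g / (2*pi*nu^2)
    return G_SUN / (2.0 * np.pi * nu_hz ** 2)

d = 0.5e6  # m ; placeholder tube separation (not the paper's @xmath7)
for nu in (2.0e-3, 3.0e-3, 4.0e-3, 5.0e-3):
    lam = f_mode_wavelength(nu)
    # 'mm' below denotes megameters, following the text's convention
    print(f"nu = {nu * 1e3:.0f} mhz : lambda = {lam / 1e6:.2f} mm , d/lambda = {d / lam:.3f}")
```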
we have studied the scattered wave field for the different models ( a ) , ( b ) and ( c ) using four frequencies of @xmath0-mode .
figure [ fig4 ] shows the scattered vertical velocity @xmath29 as a function of time measured at point b(-14,0 ) for the different magnetic features ( figure [ fig3 ] ) and for four frequencies of @xmath0-mode ( from 2 to 5 mhz ) .
the amplitude of the incoming wave in an unperturbed field is normalized to 1 ( figure [ fig1 ] ) .
firstly , we observe that the curves are more extended in time for @xmath54 mhz than for @xmath55 mhz where the wave packets are more compressed .
this effect shows the variation of the wavelength of the wave packet with the frequency .
as shown in figure [ fig3 ] , figure [ fig4 ] confirms that the monolithic tube ( b ) and the hexagonal monolithic model ( c ) have the same behaviour and consequently oscillate in phase , although the hexagonal monolithic model shows a reduced amplitude relative to that of the monolithic tube for all frequencies .
the behaviour of the scattered curve from the compact cluster model is more interesting . for @xmath56 mhz ( @xmath57 )
, the curve of the cluster measured at b shows a similar trend to the curve of the single tube of 200 km radius . at these frequencies ,
the cluster should oscillate as a monolithic tube of the same size .
however , it oscillates with a different mode due to its fibril structure , which demonstrates the particularity of this model , with a behaviour completely different from that of the monolithic tube at these frequencies . for @xmath55 mhz ,
the curve of the cluster starts to be out of phase with that of the single tube of 200 km radius .
this case corresponds to the multiple - scattering regime in principle since the condition @xmath58 is satisfied .
however , no signature of the waves scattered by individual tubes within the cluster was observed in the near field at this frequency .
in fact , individual tubes within the cluster start to oscillate more efficiently in the @xmath12-direction , which probably explains the reduction of the scattering amplitude of the cluster at this frequency .
in this section , we want to know if the multiple - scattering regime occurs for a larger size compact cluster , which corresponds to a larger separation distance ( @xmath7 ) .
the distance @xmath7 in the case of a cluster is fixed as @xmath49 , where @xmath50 is the radius of an individual tube within the cluster .
nevertheless , the multiple - scattering regime occurs if @xmath7 is about @xmath44 . in this case
a radius of @xmath59 km satisfies the condition of multiple - scattering at the standard frequency @xmath38 mhz .
given that , we have made the simulations displayed in figure [ fig5 ] , where snapshots ( a ) , ( b ) and ( c ) show the propagation of an @xmath0-mode wave packet through : ( a ) a cluster of seven identical flux tubes in a hexagonal compact configuration , where each individual tube within the cluster has a radius @xmath60 km ( bottom panel of figure [ fig2 ] ) ; ( b ) a single monolithic tube whose radius @xmath24 = 1.76 mm is the average radius of the cluster in ( a ) ; ( c ) a hexagonal monolithic flux tube whose shape at the surface is the same as that of the cluster in ( a ) .
we observe from figure [ fig5 ] that unlike the case of small sunspot models , the scattered near fields of the large hexagonal monolithic and monolithic tube models are different .
the close near field of the hexagonal monolithic model ( c ) shows a non - uniform waveform , which indicates a contribution from the outer extents ( or from the gaps between the extents ) to the scattering . the cluster in snapshot ( a ) shows a triangular waveform in the left near field .
this can be the result of oscillations in the @xmath10-direction as well as in the @xmath12-direction from tubes within the cluster under the multiple - scattering regime .
figure [ fig6 ] shows the scattered vertical velocity @xmath29 as a function of time measured at point b ( figure [ fig5 ] ) for the different magnetic features and for the four @xmath0-mode frequencies . as in figure [ fig4 ] ,
the incoming wave in an unperturbed field has an amplitude of 1 . unlike the small cluster model
, we observe that the scattered curves of the large cluster are not in phase with those of the single tube of 400 km radius for all frequencies .
similarly , the scattering curves of the hexagonal monolithic model and its equivalent monolithic tube are no longer in phase , which indicates a different behaviour for both models .
it is also observed that the scattering from the hexagonal monolithic model increases with frequency .
it reaches a maximum at @xmath61 mhz and @xmath55 mhz where it dominates the scattering from the other models .
in previous sections , we measured the scattered surface vertical velocity from a single point in the far field ( point b ) .
the obtained results can be checked against observations of the solar surface with dopplergrams .
however , we need a more quantitative analysis to characterize the scattering , not only from a single point ( b ) but over the entire @xmath12-ridge . in this section ,
we use the vertical scattered wave , measured along the line @xmath62 , to compute the scattering cross section @xmath63 for the different sunspot models and frequencies .
the scattering cross section is a very important parameter in the scattering problem .
it is defined for an incident plane wave as the total scattered power over the power per unit area of the incident wave . in our case , for a given frequency , the one - dimensional scattering cross section @xmath63 can be expressed as @xmath64 where @xmath65 and @xmath66 are respectively the amplitudes of the scattered and the incoming vertical velocity at the surface measured at the point @xmath67 .
the distance @xmath68 is the separation between the center of the sunspot model and the point b along the @xmath10 axis ( @xmath69 mm ) .
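one plausible reading of this definition , taking the amplitudes as the peak vertical velocity over the time window , is ( a sketch ; array names are assumptions ) :

```python
import numpy as np

def scattering_cross_section(v_sc, v_in, y):
    # v_sc, v_in : surface vertical velocity of shape (nt, ny), sampled
    # along the line x = x_B ; y : coordinates along that line
    a_sc = np.abs(v_sc).max(axis=0)   # scattered amplitude along y
    a_in = np.abs(v_in).max(axis=0)   # incident amplitude along y
    return np.trapz((a_sc / a_in) ** 2, y)
```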
figure [ fig7 ] shows the scattering cross section computed for the small and the large sunspot models .
@xmath63 is computed over @xmath70 ( from @xmath71 mm to @xmath72 mm ) and from @xmath73 to @xmath6 s. while the small monolithic tube and hexagonal monolithic models oscillate in the same way ( figure [ fig4 ] ) , the left panel of figure [ fig7 ] shows that the scattering cross section of the small monolithic tube is larger than that of the small hexagonal model . the scattering cross section of the small compact cluster model increases with frequency above that of the two other models , except at the frequency of @xmath55 mhz , where it decreases below that of the monolithic tube model .
we attribute this particular behaviour to the absorption caused mainly by the simultaneous oscillations of tubes within the cluster in the @xmath12-direction . unlike the small cluster model , the large compact cluster model in the right panel of figure [ fig7 ] shows the minimum scattering cross section of the three models .
this result is explained by the multiple - scattering regime ( @xmath74 mhz ) and the absorption from tubes within the cluster .
the large monolithic tube and hexagonal monolithic models have almost the same scattering cross section for the frequencies @xmath75 mhz , whereas at the frequency @xmath55 mhz , the scattering cross section of the large hexagonal monolithic model increases slightly above that of the monolithic tube model .
this result can be seen clearly in figure [ fig6 ] at the frequency @xmath55 mhz where the scattering from this model is larger than the scattering from the other models .
actually , the geometrical shape of the large hexagonal model imposes a constraint on the oscillation modes that are excited in it , in comparison with the cylindrical shape of the monolithic tube . in this case , more oscillation modes are excited inside the monolithic tube than inside the hexagonal model . therefore , we have less absorption and more scattering of waves from the hexagonal model than from the monolithic tube model of the same size .
this is a very important observation which reveals how the geometrical shape affects the input and output of waves in large sunspots particularly for a high frequency .
in this paper , we are interested in finding a way to distinguish between distinct models that characterize the magnetic structure of sunspots .
this can be observed through the scattered wave field when waves interact with these features .
direct numerical simulations have begun to describe the scattering regime for an ensemble of magnetic flux tubes ( e.g. @xcite ) . in the latter paper , we studied the interaction of an @xmath0-mode of 3 mhz frequency with monolithic and cluster models of the sunspot . in the present study , we have incorporated for the first time a non - circular , hexagonal monolithic tube as a third model that can be used as a junction between the previous two models to better interpret the scattered wave field . while a true sunspot resembling the latter model or the compact hexagonal cluster could not realistically exist , it is always useful as a first step to understand the basic physics or observations using simple models which do not require expensive computational resources . to discuss the scattering as a function of @xmath53 , instead of changing the distance between two neighboring tubes within the cluster @xmath7 as in @xcite , we fixed @xmath7 at the hexagonal compact - cluster separation and changed the wavelength @xmath9 of the incoming wave through the variation of the frequency , to see whether we can have a multiple - scattering regime and absorption from a cluster in a close - packed configuration as in the case of a loose cluster in the same regime . for more general results ,
we have performed simulations with two kinds of cluster : 1- a small cluster as in @xcite , made of seven compact tubes of 200 km radius , where the separation distance is @xmath76 km ; 2- a large cluster composed of seven compact tubes of 400 km radius , where @xmath77 mm . for the frequencies @xmath78 mhz , we have demonstrated that the small cluster ( @xmath79 ) oscillates more like an individual tube of 200 km than like a monolithic tube of the same size , but with a larger amplitude . in this frequency range ,
the scattering cross section of the small cluster is the largest of the three models , revealing that this model acts more like a scatterer under these conditions .
this important result can be verified with helioseismic measurements to distinguish between close - packed and loose configurations of magnetic flux tubes inside sunspots or plage on the one hand , and between fibril and monolithic configurations of sunspots on the other . for the high frequency of 5 mhz ,
the small cluster which is supposed to be in a scattering regime ( @xmath80 ) oscillates in a different way compared to the other models .
however , no signature of a multiple - scattering has been observed in the near field .
nevertheless , a distortion of the scattered wave field in the @xmath12-direction has been observed for this model .
a similar observation was mentioned by @xcite showing a more flattened scattered wave from a spaghetti model .
we think that this particular signature is caused by the simultaneous oscillation in the @xmath12-direction of the pairs of tubes aligned in a perpendicular direction to the incoming wave , independently of the scattering regime or the separation distance , as shown by @xcite .
this effect combined with the multiple - scattering condition at this frequency can explain the absorption by the cluster observed at this specific frequency in both scattering amplitude and cross section plots .
this result constitutes another criterion to distinguish a compact fibril sunspot from a monolithic one in this frequency range .
we note that this effect should be amplified with an increasing number of tubes inside the cluster .
in contrast to the small cluster , the large size cluster shows a multiple - scattering in the near field and a minimum of scattering cross section , which indicates more absorption by tubes within the cluster .
however , this absorption does not seem significant enough to be observed in the scattering cross section plot , unlike the case of the small sunspot model . to establish a scattering regime , tubes within the cluster have to exchange their scattered waves across the separation distance @xmath7 . in the case of a cluster of compact tubes , the distance @xmath7 is completely immersed in the magnetic field of the pair of tubes , whereas for a loose cluster , part of the space between the tubes lies outside the magnetic field .
therefore , the magnetic field within the cluster of compact tubes somehow does not support the scattering exchange between tubes in the horizontal direction , but rather supports the simultaneous motion of the tubes , acting as a glue that holds them together .
curiously , this characteristic describes the acoustic jacket phenomenon @xcite , which is a near field of slow waves around the tube that propagate vertically in a stratified atmosphere , carrying energy away , while being evanescent in the radial direction . we know from previous studies that the gravitational stratification removes resonant absorption of a bundle of magnetic flux tubes and may reduce strong interactions between closely spaced flux tubes .
@xcite found that the mutual induction of the near - field jackets of two tubes on close separations can dramatically alter the scattering properties of the system playing an important role in the multiple - scattering regime .
given that , it is possible that the interaction between jacket modes of tubes within the cluster of compact tubes ( @xmath81 ) inhibits the multiple - scattering regime . in conclusion , in addition to the minimum condition @xmath42 for the small cluster to have a multiple - scattering regime
, the distance @xmath7 between tubes within this model has to be larger than the distance between the locations of the minimum of the magnetic field strength in the pair of tubes .
the case of the large cluster model is different .
it is possible that the scattering from the larger individual tubes within this cluster is not completely absorbed through the jacket mode phenomenon , which probably explains the observation of near - field waves from this model despite @xmath81 . in this context
, we have to note that since this lower limit of multiple - scattering regime depends on the size of the tubes , this distance would be much smaller in the case of the thin flux tube approximation .
independently of the cluster model , our simulations show that the small hexagonal monolithic model oscillates like its equivalent monolithic tube model for all frequencies , with more scattering for the latter model . a more interesting behaviour is observed for the large hexagonal monolithic model . this model and its equivalent monolithic tube
have approximately the same scattering cross section at low and intermediate frequencies .
however , the large hexagonal model shows less absorption at high frequency .
in fact , due to its geometrical shape , fewer oscillation modes are excited inside it compared to the waves supported in the monolithic tube model .
this is an important result , demonstrating an appreciable effect of the sunspot geometrical shape on the interaction of high - frequency waves with large monolithic models , which must be taken into consideration , in addition to the radius and the wavelength , in future simulations .
this work is a step toward the understanding of the helioseismic signature of sunspot models .
more improved numerical and analytical investigations are necessary to better interpret the observations .
the author thanks robert cameron and toufik abdelatif for useful discussions . the author also thanks the anonymous referee for constructive comments and suggestions that improved the quality of the paper . | we use numerical simulations to investigate the interaction of an @xmath0-mode wave packet with small and large models of a sunspot in a stratified atmosphere .
while a loose cluster model has been largely studied before , we focus in this study on the scattering from an ensemble of tightly compact tubes .
we showed that the small compact cluster produces a slightly distorted scattered wave field in the transverse direction , which can be attributed to the simultaneous oscillations of the pairs of tubes within the cluster aligned in a perpendicular direction to the incoming wave .
however , no signature of a multiple - scattering regime has been observed from this model , whereas it has been clearly observed for the large compact cluster model .
furthermore , we pointed out the importance of the geometrical shape of the monolithic model on the interaction of @xmath0-mode waves with a sunspot in a high frequency range ( @xmath1 5 mhz ) .
these results are a contribution to the observational effort to distinguish seismically between different configurations of magnetic flux tubes within sunspots and plage .
[ firstpage ] sun : helioseismology , sun : magnetic fields , sun : oscillations , ( sun:)sunspots , sun : faculae , plages |
I recently attended a lecture by a distinguished man of letters. A poet, novelist, playwright and literary critic, this man also edits journals, directs literary festivals, collaborates on documentary projects, teaches full time at a university and is raising a family. When an audience member asked how he managed to find the time for all these things, he said, "Everything I do is in the interests of making time for my true passion: watching TV."
He was addicted, he said, to "Damages" and "Downton Abbey." He claimed to have watched all five seasons of "The Wire" in a single weekend. "I can't get enough TV," he said. "It's all I want to do."
There may have been some self-deprecating hyperbole at work (is it physically, mentally or even temporally possible to watch all 60 episodes of "The Wire" in a single, two-day weekend?), but from where I sat — in an auditorium, hoping that my home DVR was recording "Mad Men" and that my husband wasn't "cheating" and watching it without me — it all sounded refreshingly candid. We are in an oft-noted "new golden age of television." Since the late 1990s, when HBO debuted "The Sopranos" and other cable channels followed with their own original programming, the device once known as the idiot box has become markedly less idiotic.
Sure, "Jersey Shore"and various public stockades that masquerade as talent shows still crowd the airwaves. But challenging story arcs, complex characters and discernible quality can also get a ton of TV traction these days, which is only highlighted by the way we watch television. No longer at the mercy of whatever happens to be on (or of VHS machines no one could figure out how to program), we can press a button and save a show, order a full season on Netflix, stream it on our phones or download it episode by episode whenever, wherever. We can, in other words, give TV our undivided attention on our own terms.
And that means we can binge watch. Parodied to hilarious effect on "Portlandia," where a couple forgo eating, sleeping and hygiene and eventually incur job loss because they can't turn away from "Battlestar Galactica," marathon TV was considered largely harmless until last week. That's when Jim Pagels, an intern at Slate magazine, attacked "pandemic" binge watching and asserted that cliffhangers "need time to breathe" and that watching one episode right after the other would "ruin the whole batch."
The outburst was timed to Sunday's season premiere of "Breaking Bad," which fans tend to inhale with the same desperate frenzy displayed by the meth junkies the show often features (I suspect I would inject the show intravenously if I could). It didn't go over well. Time magazine, the Wall Street Journal and NPR took Pagels to task. Bob Garfield of "On the Media" brought him in for a mild tongue-lashing and pronounced his own 33-hour "Breaking Bad" binge "one of the greatest cultural experiences of my life." Garfield reminded his guest that Dickens' "Great Expectations" was originally presented in serial form but no one questions the purity of reading it cover to cover today.
Pagels, it should be said, thoroughly argued his case (complete with bullet points). But then so did Garfield — big gulps or weekly sips, each has its benefits and drawbacks. We all know that. So why, other than his slightly sanctimonious tone, did Pagels touch such a nerve?
Because binge watching is code for something else entirely. It's a way to distinguish highbrow from lowbrow.
You rarely hear someone brag about watching 16 straight episodes of "Melrose Place." Binges are for the stuff that gets recapped on culture blogs, that finds its way into academic papers with names like "'24,' 'Lost,' and 'Six Feet Under': Post-Traumatic Television in the Post-9/11 Era" (yes, that's an actual dissertation).
Marathon TV also takes effort. It distinguishes its practitioners (at least in their own minds) from the casual, passive viewer and turns TV watching into an act of agency, like reading a Russian novel or running an actual marathon. As a result, it turns something that was once a source of shame — prolonged couch potato-dom — into not just a badge of honor but membership in a club.
And one to which even a man of letters will admit paying his dues. Speaking of that, I'm taking next week off. I've got a 53-hour appointment with "Big Love."
|||||
Breaking Bad returns this Sunday, July 15, bringing millions of devoted fans back to AMC—although not enough to keep the network on Dish, apparently—and spurring others to catch up with the show’s 46 episodes in a time span that may require copious amounts of Walter White’s purest Blue Sky.
To which I say: Slow down. Even if you aren’t taking crystal meth to fuel your rapid consumption of the best series of the last 10 years (yes, I have seen The Wire), you’re still ruining much of what makes the show—and all TV shows—great.
TV binge-watching is a pandemic that has afflicted many of the nation’s college students, with sites like SideReel, Netflix, and Megavideo—not to mention full-season DVD sets—readily at their disposal. They disappear into their dorm rooms for days at a time and emerge with encyclopedic knowledge of Vincent Chase’s movie career or the Pawnee Parks Department’s budget. As Mary McNamara noted a few months back in the Los Angeles Times, Netflix has even catered its original content toward this consumption model by releasing all episodes of its own new seasons at once, encouraging fans to plow through entire seasons in marathon sessions.
While it’s not surprising that America’s unprincipled youth have flocked to the latest trend, some of our most venerable critics have also hopped on the binge-watching bandwagon. Emily Nussbaum, formerly of New York and now of The New Yorker, went on a Breaking Bad bender last summer. “Binge-watching a show like Breaking Bad is probably the purest way to watch a great series,” she wrote. But if you ask me, she has ruined the entire batch.
“I had sailed past waves of buzz, raves, and backlash, past interviews with Gilligan and Cranston, misleading promo reels, casting news, and Twitter debates,” Nussbaum writes. “Stepping outside the audience this way made it easier to enjoy the show’s story as a whole. But I also felt freer to detach myself from viewing it in episodes at all, taking the narrative in instead as an imagistic poem.” Grantland, meanwhile, declared that “binging on an entire season of a television show without commercial interruption allows you to completely 'immerse' yourself in the world of your new favorite show.” Which is a bit like saying you should “immerse” yourself in Vegas by blowing through all your gambling money by the time your wife and kids have checked into the room.
I’m not saying that you must watch a show when it originally airs in order to fully enjoy it. Catching up on shows after they’ve aired is the only way anyone could keep pace with all the good TV out there these days. But there’s a proper way to do so, one that maintains the integrity of the art form. Before I lay out some guidelines, though, let me explain how marathon viewing destroys much of what is best about TV.
1. Episodes have their own integrity, which is blurred by watching several in a row.
TV series must constantly sustain two narrative arcs at once: that of the individual episode—which has its own beginning, middle, and end—and that of the season as a whole. (Some shows, like Breaking Bad and The Wire, operate on a third: that of the entire series.) To fully appreciate a show, you must pay attention to each of these arcs. This is one of the defining features of television as a medium and one of the things that makes it great. A TV show is not “an imagistic tone poem,” and it shouldn’t be viewed as one.
2. Cliffhangers and suspense need time to breathe.
Taking the time to ponder which Oceanic flight 815 member the Dharma Initiative brought back to the island or why Peggy decided to tell Pete she had his baby are an essential part of the experience of a series. Take the first season of Homeland: Much of the pleasure it provided came from wracking one’s brain each week—and changing one’s mind multiple times—trying to decide whether or not Brody was a double agent. That pleasure evaporates when you simply click “play” on the next episode.
3. Episode recaps and online communities provide key analysis and insight.
Contra David Simon, TV recaps really do enhance one’s experience of a TV show. Even if you're catching up on DVD or Netflix, you can still take the time to read recaps of nearly any episode on the A.V. Club, Hitfix, and here on Slate. They all provide great perspectives that you likely wouldn't have picked up on your own.
4. TV characters should be a regular part of our lives, not someone we hang out with 24/7 for a few days and then never see again.
Our best friends are the ones we see every so often for years, and TV characters should be the same way. I feel like I grew up with Michael Scott, because I spent 22 minutes a week with him every Thursday night for seven years. A friend of mine who recently cranked through all eight seasons of The Office in two weeks (really) probably thinks of Carell's character like someone he hung out with at an intensive two-week corporate seminar and never saw again. Binge-watching reduces the potential for such deep, Draper-like relationships. While the Grantland piece argues that binges are the only way to forge "deep emotional connections," in fact, the opposite is true.
5. Taking breaks maintains the timeline of the TV universe.
There are many exceptions to this rule, but TV series tend to place a few diegetic days between episodes and a few months between seasons. Thus, its rhythms match our own—when we watch them on their schedule. Watch an episode of Party Down a few days after finishing the last one, for instance, and notice how all the caterers have also had a few days off since their last gig. Or return to a new season of 30 Rock after a summer away, and see how the TGS writers are also returning from their vacation.
If you need to catch up with a show, here are the guidelines: Wait a minimum of 24 hours between episodes and at least a couple weeks between seasons. If one TV show doesn’t provide a full night’s entertainment for you, pick out a few programs you’ve been meaning to catch up with and watch one episode of each. | – When Slate intern Jim Pagels wrote last week that "TV binge-watching is a pandemic" that ruins "the integrity of the art form," condemnation rained down from some of the mightiest media outlets in the land. How dare he! We are, after all "in an oft-noted 'new golden age of television,'" writes Meghan Daum of the LA Times, and thanks to new technology we're not at the mercy of TV schedules. "We can give TV our undivided attention on our own terms." Weekly, episodic viewing has its charms, too. "We all know that. So why, other than his slightly sanctimonious tone, did Pagels touch such a nerve?" The answer: "Binge watching is code." You don't brag about binge-watching Jersey Shore—binges are for highbrow stuff. "It distinguishes its practitioners (at least in their own minds) from the casual, passive viewer and turns TV watching into an act of agency, like reading a Russian novel," Daum asserts. Suddenly, "prolonged couch potato-dom" is transformed into "a badge of honor." Click for Daum's full column.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Elder Abuse Prevention Act''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) The number of older individuals in the United States
who are abused, neglected, or exploited is increasing, and a
large percentage of elder abuse cases are not reported to
Federal and State law enforcement authorities.
(2) The number of individuals in the United States aged 65
and older is projected to increase exponentially in the coming
years, and many of these valued citizens will begin to
constitute a vulnerable population at increased risk of abuse
and exploitation in domestic and community-based settings.
(3) The projected increase in the number of individuals in
the United States aged 65 and over is expected to result in a
corresponding increase in the number of cases of elder abuse,
which suggests an urgent need for comprehensive consideration
of means by which such abuse can be prevented, reported, and
prosecuted by Federal and State authorities.
(4) Violent, physical, and sexual assaults upon older
individuals are particularly abhorrent and should be prosecuted
vigorously by Federal and State law enforcement authorities.
Such acts should be deterred by appropriate penalties including
enhanced penalties and the elimination of parole for
individuals convicted of violent sexual offenses against the
elderly.
SEC. 3. NO PAROLE FOR SEXUAL OFFENSES COMMITTED AGAINST OLDER
INDIVIDUALS OR FOR SEXUALLY VIOLENT PREDATORS.
(a) In General.--For each fiscal year after the expiration of the
period specified in subsection (b)(1) in which a State receives funds
for the program referred to in subsection (b)(2), the State shall have
in effect throughout the State laws and policies that prohibit parole
for any individual who--
(1) is convicted of a criminal sexual offense against a
victim who is an older individual, which shall include any such
offense under State law for conduct that would constitute an
offense under chapter 109A of title 18, United States Code, had
the conduct occurred in the special maritime and territorial
jurisdiction of the United States or in a Federal prison; and
(2) is a sexually violent predator.
(b) Compliance and Ineligibility.--
(1) Compliance date.--Each State shall have not more than 3
years from the date of enactment of this Act to comply with
subsection (a), except that--
(A) the Attorney General may grant an additional 2
years to a State that is making good faith efforts to
comply with such subsection; and
(B) the Attorney General shall waive the
requirements of subsection (a) if compliance with such
subsection by a State would be unconstitutional under
the constitution of such State.
(2) Ineligibility for funds.--For any fiscal year after the
expiration of the period specified in paragraph (1), a State
that fails to comply with subsection (a) shall not receive 10
percent of the funds that would otherwise be allocated for that
fiscal year to the State under the Edward Byrne Memorial
Justice Assistance Grant Program under subpart 1 of part E of
title I of the Omnibus Crime Control and Safe Streets Act of
1968 (42 U.S.C. 3750 et seq.).
(c) Reallocation.--Amounts not allocated under the program referred
to in subsection (b)(2) to a State for failure to fully comply with
subsection (a) shall be reallocated under that program to States that
have not failed to comply with such subsection.
(d) Definitions.--For the purposes of this section--
(1) the term ``older individual'' means an individual who
is 65 years of age or older; and
(2) the term ``sexually violent predator'' means a person
who--
(A) has been convicted of a sexually violent
offense; and
(B) has been diagnosed by a qualified mental health
professional as having a mental abnormality or
personality disorder that makes the person likely to
engage in predatory sexually violent offenses, or has
been determined by a court to suffer from such an
illness or disorder.
SEC. 4. AMENDMENT TO THE FEDERAL SENTENCING GUIDELINES.
(a) Request for Immediate Consideration by the United States
Sentencing Commission.--Pursuant to its authority under section 994(p)
of title 28, United States Code, and in accordance with this section,
the United States Sentencing Commission shall--
(1) promptly review the sentencing guidelines applicable to
sexual offenses committed against the elderly;
(2) expeditiously consider the promulgation of new
sentencing guidelines or amendments to existing sentencing
guidelines to provide an enhancement for such offenses; and
(3) submit to Congress an explanation of actions taken by
the Sentencing Commission pursuant to paragraph (2) and any
additional policy recommendations the Sentencing Commission may
have for combating offenses described in paragraph (1).
(b) Considerations in Review.--In carrying out this section, the
Sentencing Commission shall--
(1) ensure that the sentencing guidelines and policy
statements reflect the serious nature of such offenses and the
need for aggressive and appropriate law enforcement action to
prevent such offenses;
(2) assure reasonable consistency with other relevant
directives and with other guidelines;
(3) account for any aggravating or mitigating circumstances
that might justify exceptions, including circumstances for
which the sentencing guidelines currently provide sentencing
enhancements;
(4) make any necessary conforming changes to the sentencing
guidelines; and
(5) assure that the guidelines adequately meet the purposes
of sentencing as set forth in section 3553(a)(2) of title 18,
United States Code.
(c) Emergency Authority and Deadline for Commission Action.--The
United States Sentencing Commission shall promulgate the guidelines or
amendments provided for under this section as soon as practicable, and
in any event not later than 180 days after the date of enactment of
this Act, in accordance with the procedures set forth in section 21(a)
of the Sentencing Reform Act of 1987, as though the authority under
that Act had not expired. | Elder Abuse Prevention Act - Requires a state that is receiving funds for certain law enforcement assistance programs under the Omnibus Crime Control and Safe Streets Act of 1968 to have in effect laws and policies that prohibit parole for any individual who is: (1) convicted of a criminal sexual offense against a victim who is an older individual (defined as age 65 or older); or (2) a sexually violent predator (defined as a person who has been convicted of a sexually violent offense and who has been diagnosed by a qualified mental health professional as having a mental abnormality or personality disorder that makes the person likely to engage in predatory sexually violent offenses or who has been determined by a court to suffer from such an illness or disorder). Grants states three years to implement such laws and policies (with one additional two-year extension for states making good faith efforts at implementation). Renders any state that does not implement such laws and policies within the required period ineligible for 10% of funding for its law enforcement assistance programs.
Requires the U.S. Sentencing Commission to promptly review its guidelines for sexual offenses committed against the elderly and to consider new guidelines for enhanced sentencing for such crimes. |
SECTION 1. SHORT TITLE; TABLE OF CONTENTS.
(a) Short Title.--This Act may be cited as the ``Social Security
Expansion Act''.
(b) Table of Contents.--The table of contents of this Act is as
follows:
Sec. 1. Short title; table of contents.
Sec. 2. Across-the-board benefit increase.
Sec. 3. Computation of cost-of-living increases.
Sec. 4. Increase in minimum benefit for lifetime low earners based on
years in the workforce.
Sec. 5. Payroll tax on remuneration up to contribution and benefit base
and more than $250,000.
Sec. 6. Tax on net earnings from self-employment up to contribution and
benefit base and more than $250,000.
Sec. 7. Tax on investment gain.
SEC. 2. ACROSS-THE-BOARD BENEFIT INCREASE.
Section 215(a)(1)(B) of the Social Security Act (42 U.S.C.
415(a)(1)(B)) is amended--
(1) by redesignating clause (iii) as clause (iv); and
(2) by inserting after clause (ii) the following new
clause:
``(iii) For individuals who initially become eligible for
old-age or disability insurance benefits, or who die (before
becoming eligible for such benefits) in any calendar year after
2020, the amount determined under clause (i) of this
subparagraph for purposes of subparagraph (A)(i) for such
calendar year shall be increased by--
``(I) for calendar year 2021, 1 percent;
``(II) for each of calendar years 2022 through
2034, the percent determined under this clause for the
preceding year increased by 1 percentage point; and
``(III) for calendar year 2035 and each year
thereafter, 15 percent.''.
SEC. 3. COMPUTATION OF COST-OF-LIVING INCREASES.
(a) In General.--Section 215(i)(1) of the Social Security Act (42
U.S.C. 415(i)(1)) is amended by adding at the end the following new
subparagraph:
``(H) the term `Consumer Price Index' means the Consumer
Price Index for Elderly Consumers (CPI-E, as published by the
Bureau of Labor Statistics of the Department of Labor).''.
(b) Application to Pre-1979 Law.--
(1) In general.--Section 215(i)(1) of the Social Security
Act as in effect in December 1978, and as applied in certain
cases under the provisions of such Act as in effect after
December 1978, is amended by adding at the end the following
new subparagraph:
``(D) the term `Consumer Price Index' means the Consumer
Price Index for Elderly Consumers (CPI-E, as published by the
Bureau of Labor Statistics of the Department of Labor).''.
(2) Conforming change.--Section 215(i)(4) of the Social
Security Act (42 U.S.C. 415(i)(4)) is amended by inserting
``and by section 102 of the Social Security Expansion Act''
after ``1986''.
(c) No Effect on Adjustments Under Other Laws.--Section 215(i) of
the Social Security Act (42 U.S.C. 415(i)) is amended by adding at the
end the following:
``(6) Any provision of law (other than in this title, title VIII,
or title XVI) which provides for adjustment of an amount based on a
change in benefit amounts resulting from a determination made under
this subsection shall be applied and administered without regard to the
amendments made by section 102 of the Social Security Expansion Act.''.
(d) Publication of Consumer Price Index for Elderly Consumers.--The
Bureau of Labor Statistics of the Department of Labor shall prepare and
publish the index authorized by section 191 of the Older Americans
Amendments Act of 1987 (29 U.S.C. 2 note) for each calendar month,
beginning with July of the calendar year following the calendar year in
which this Act is enacted, and such index shall be known as the
``Consumer Price Index for Elderly Consumers''.
(e) Effective Date.--The amendments made by subsection (a) shall
apply to determinations made with respect to cost-of-living computation
quarters (as defined in section 215(i)(1)(B) of the Social Security Act
(42 U.S.C. 415(i)(1)(B))) ending on or after September 30 of the second
calendar year following the calendar year in which this Act is enacted.
SEC. 4. INCREASE IN MINIMUM BENEFIT FOR LIFETIME LOW EARNERS BASED ON
YEARS IN THE WORKFORCE.
(a) In General.--Section 215(a)(1) of the Social Security Act (42
U.S.C. 415(a)(1)) is amended--
(1) by redesignating subparagraph (D) as subparagraph (E);
and
(2) by inserting after subparagraph (C) the following new
subparagraph:
``(D)(i) Effective with respect to the benefits of individuals who
become eligible for old-age insurance benefits or disability insurance
benefits (or die before becoming so eligible) after 2015, no primary
insurance amount computed under subparagraph (A) may be less than the
greater of--
``(I) the minimum monthly amount computed under
subparagraph (C); or
``(II) in the case of an individual who has more than 10
years of work (as defined in clause (iv)(I)), the alternative
minimum amount determined under clause (ii).
``(ii)(I) The alternative minimum amount determined under this
clause is the applicable percentage of \1/12\ of the annual dollar
amount determined under clause (iii) for the year in which the amount
is determined.
``(II) For purposes of subclause (I), the applicable percentage is
the percentage specified in connection with the number of years of
work, as set forth in the following table:
``If the number of years The applicable
of work is: percentage is:
11........................................... 6.25 percent
12........................................... 12.50 percent
13........................................... 18.75 percent
14........................................... 25.00 percent
15........................................... 31.25 percent
16........................................... 37.50 percent
17........................................... 43.75 percent
18........................................... 50.00 percent
19........................................... 56.25 percent
20........................................... 62.50 percent
21........................................... 68.75 percent
22........................................... 75.00 percent
23........................................... 81.25 percent
24........................................... 87.50 percent
25........................................... 93.75 percent
26........................................... 100.00 percent
27........................................... 106.25 percent
28........................................... 112.50 percent
29........................................... 118.75 percent
30 or more................................... 125.00 percent.
``(iii) The annual dollar amount determined under this clause is--
``(I) for calendar year 2016, the poverty guideline for
2015; and
``(II) for any calendar year after 2016, the annual dollar
amount for 2016 multiplied by the ratio of--
``(aa) the national average wage index (as defined
in section 209(k)(1)) for the second calendar year
preceding the calendar year for which the determination
is made, to
``(bb) the national average wage index (as so
defined) for 2014.
``(iv) For purposes of this subparagraph--
``(I) the term `year of work' means, with respect to an
individual, a year to which 4 quarters of coverage have been
credited based on such individual's wages and self-employment
income; and
``(II) the term `poverty guideline for 2015' means the
annual poverty guideline for 2015 (as updated annually in the
Federal Register by the Department of Health and Human Services
under the authority of section 673(2) of the Omnibus Budget
Reconciliation Act of 1981) as applicable to a single
individual.''.
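To make the arithmetic of new subparagraph (D) concrete, the sketch below computes the alternative minimum monthly amount: the applicable percentage from the table (6.25 percent per year of work beyond 10, capped at 125 percent for 30 or more years) applied to 1/12 of the wage-indexed annual dollar amount. The poverty guideline and national average wage index figures used are placeholders, not values fixed by the bill:

```python
# Illustrative computation of the alternative minimum amount under new
# section 215(a)(1)(D). The poverty guideline and wage-index values are
# placeholders for the figures the Secretary would actually publish.

def applicable_percentage(years_of_work):
    years = min(max(years_of_work, 10), 30)
    return (years - 10) * 6.25  # percent, per the table above

def annual_dollar_amount(year, poverty_guideline_2015, awi):
    if year <= 2016:
        return poverty_guideline_2015
    # Clause (iii)(II): indexed by national average wage growth since 2014.
    return poverty_guideline_2015 * awi[year - 2] / awi[2014]

def alternative_minimum_monthly(years_of_work, year, guideline_2015, awi):
    annual = annual_dollar_amount(year, guideline_2015, awi)
    return applicable_percentage(years_of_work) / 100.0 * annual / 12.0

awi = {2014: 46_481.52, 2018: 52_145.80}   # assumed index values
print(round(alternative_minimum_monthly(30, 2020, 11_770.0, awi), 2))
# ~1375.45: 125% of 1/12 of the wage-indexed annual amount
```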
(b) Recomputation.--Notwithstanding section 215(f)(1) of the Social
Security Act, the Commissioner of Social Security shall recompute
primary insurance amounts originally computed for months prior to
November 2014 to the extent necessary to carry out the amendments made
by this section.
(c) Conforming Amendment.--Section 209(k)(1) of such Act (42 U.S.C.
409(k)(1)) is amended by inserting ``215(a)(1)(E),'' after
``215(a)(1)(D),''.
SEC. 5. PAYROLL TAX ON REMUNERATION UP TO CONTRIBUTION AND BENEFIT BASE
AND MORE THAN $250,000.
(a) In General.--Paragraph (1) of section 3121(a) of the Internal
Revenue Code of 1986 is amended by inserting after ``such calendar
year.'' the following: ``The preceding sentence shall apply only to
calendar years for which the contribution and benefit base (as so
determined) is less than $250,000, and, for such calendar years, only
to so much of the remuneration paid to such employee by such employer
with respect to employment as does not exceed $250,000.''.
    (b) Conforming Amendment.--Paragraph (1) of section 3121(a) of the
Internal Revenue Code of 1986 is amended by striking ``Act) to'' and
inserting ``Act), or in excess of $250,000, to''.
(c) Effective Date.--The amendments made by this section shall
apply to remuneration paid after December 31, 2015.
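The combined effect of subsections (a) and (b) is a gap in the OASDI wage base: remuneration is taxed up to the contribution and benefit base, exempt between the base and $250,000, and taxed again above $250,000 (until the base itself reaches $250,000, at which point the gap closes). A sketch of that rule, using the 2015 base of $118,500 and the combined 12.4 percent employer-employee rate purely for illustration:

```python
# Sketch of OASDI-taxable remuneration under section 5: taxed below the
# contribution and benefit base, exempt in the gap up to $250,000, taxed
# again above $250,000. Base and rate are illustrative.

def oasdi_taxable_wages(wages, base=118_500.0, threshold=250_000.0):
    below_base = min(wages, base)
    above_threshold = max(wages - threshold, 0.0)
    return below_base + above_threshold

for w in (90_000.0, 180_000.0, 400_000.0):
    taxable = oasdi_taxable_wages(w)
    print(int(w), int(taxable), round(taxable * 0.124, 2))
# 90000 -> fully taxable; 180000 -> 118500; 400000 -> 118500 + 150000
```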
SEC. 6. TAX ON NET EARNINGS FROM SELF-EMPLOYMENT UP TO CONTRIBUTION AND
BENEFIT BASE AND MORE THAN $250,000.
(a) In General.--Paragraph (1) of section 1402(b) of the Internal
Revenue Code of 1986 is amended to read as follows:
``(1) in the case of the tax imposed by section 1401(a),
the excess of--
``(A) that part of the net earnings from self-
employment which is in excess of--
``(i) an amount equal to the contribution
and benefit base (as determined under section
230 of the Social Security Act) which is
effective for the calendar year in which such
taxable year begins, minus
``(ii) the amount of the wages paid to such
individual during such taxable year; over
``(B) that part of the net earnings from self-
employment which is in excess of the sum of--
``(i) the excess of--
``(I) the net earnings from self-
employment reduced by the excess (if
any) of subparagraph (A)(i) over
subparagraph (A)(ii), over
``(II) $250,000, reduced by such
contribution and benefit base, plus
``(ii) the amount of the wages paid to such
individual during such taxable year in excess
of such contribution and benefit base and not
in excess of $250,000; or''.
(b) Phaseout.--Subsection (b) of section 1402 of the Internal
Revenue Code of 1986 is amended by adding at the end the following:
``Paragraph (1) shall apply only to taxable years beginning in calendar
years for which the contribution and benefit base (as determined under
section 230 of the Social Security Act) is less than $250,000.''.
(c) Effective Date.--The amendments made by this section shall
apply to net earnings from self-employment derived, and remuneration
paid, after December 31, 2015.
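The rewritten section 1402(b)(1) coordinates wages and self-employment earnings so that, stacked on top of any wages, net earnings are taxed below the contribution and benefit base and above $250,000 but not in between. The sketch below models that apparent net effect under stated assumptions; it is a simplification, not a clause-by-clause transcription of the statutory excess-over-excess language:

```python
# Simplified model of OASDI-taxable net earnings from self-employment
# (NESE) under section 6, stacking NESE on top of wages. Illustrative
# base; not a literal rendering of the subparagraph (A)/(B) arithmetic.

def seca_oasdi_taxable(nese, wages, base=118_500.0, threshold=250_000.0):
    below_base = max(min(nese, base - wages), 0.0)   # room left under the base
    above_threshold = max(nese - max(threshold - wages, 0.0), 0.0)
    return below_base + above_threshold

print(seca_oasdi_taxable(300_000.0, 0.0))        # 168500.0 = 118500 + 50000
print(seca_oasdi_taxable(100_000.0, 200_000.0))  # 50000.0 (portion above 250k)
```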
SEC. 7. TAX ON INVESTMENT GAIN.
(a) In General.--Subsection (a) of section 1411 of the Internal
Revenue Code of 1986 is amended by striking ``3.8 percent'' each place
it appears and inserting ``10 percent''.
(b) Conforming Amendment.--The heading for chapter 2A of the
Internal Revenue Code of 1986 is amended by inserting ``AND SOCIAL
SECURITY'' after ``MEDICARE''.
(c) Trust Funds.--
(1) Federal old-age and survivors insurance trust fund.--
Subsection (a) of section 201 of the Social Security Act (42
U.S.C. 401) is amended--
(A) in paragraph (4), by striking the period at the
end and inserting ``; and'';
(B) by inserting after paragraph (4) the following
new paragraph:
``(5) 62 percent of the taxes imposed under section 1411 of the
Internal Revenue Code of 1986, less the amounts specified in clause (3)
of subsection (b) of this section.''; and
(C) in the flush matter at the end--
(i) by striking ``clauses (3) and (4)''
each place it appears and inserting ``clauses
(3), (4), and (5)''; and
(ii) by striking ``clauses (1) and (2)''
and inserting ``clauses (1), (2), and (3)''.
(2) Federal disability insurance trust fund.--Subsection
(b) of such section is amended--
(A) in paragraph (2), by striking the period at the
end and inserting ``; and''; and
(B) by adding at the end the following new
paragraph:
``(3) 9 percent of the taxes imposed under section 1411 of the
Internal Revenue Code of 1986.''.
(d) Effective Date.--The amendments made by this section shall
apply to taxable years beginning after December 31, 2015. | Social Security Expansion Act Amends title II (Old Age, Survivors and Disability Insurance) of the Social Security Act to: increase the primary insurance amount for all eligible beneficiaries, beginning in 2021; revise computation of cost-of-living adjustments to use the Consumer Price Index for Elderly Consumers; increase the special minimum primary insurance amount for lifetime low earners based on years in the workforce. Amends the Internal Revenue Code to: (1) apply employment and self-employment taxes to remuneration up to the contribution and benefit base and to remuneration in excess of $250,000; and (2) increase the tax on investment gain from 3.8% to 10% of the lesser of net investment income for such taxable year or the excess (if any) of the modified adjusted gross income for such taxable year, over the threshold amount, with 62% of such tax allocated to the Federal Old-Age and Survivors Insurance Trust Fund and 9% allocated to the Federal Disability Insurance Trust Fund. |
President Obama said today he is willing to compromise on the George W. Bush tax cuts to preserve them for the middle class.
"We've got to make sure that we're coming up with a solution, even if it's not 100% of what I want or what the Republicans want," Obama said in North Carolina. "There's no reason that ordinary Americans should see their taxes go up next year."
Obama spoke as aides and members of Congress said they are discussing a potential deal for a temporary extension of all the tax cuts, including those for individuals who make more than $200,000 annually and couples who make more than $250,000.
The president, along with Vice President Biden, is scheduled to meet this afternoon with Democratic leaders to discuss the tax cut issue.
Obama and other Democrats want an extension only for the middle class but do not have the votes to overcome a filibuster by Senate Republicans.
Without some kind of deal, all of the George W. Bush tax cuts will expire at the end of the year, meaning a de facto tax hike for everybody.
"A middle-class tax hike would be very tough, not only on working families, it would also be a drag on our economy at this moment," Obama told a crowed at Forsyth Community College in Winston-Salem, N.C.
Any agreement must include an extension of unemployment benefits, Obama said, calling that "a priority."
Republicans such as Sen. Jon Kyl, R-Ariz. -- who is involved in talks with the White House -- said they want a permanent extension of all the tax cuts but may be willing to settle for temporary extensions instead.
Kyl and other Republicans argue that no one's taxes should be raised in tough economic times and that wealthy taxpayers are the ones that create jobs.
White House spokesman Bill Burton declined to comment on many specifics of the negotiations but did say they are going well.
"The president is confident that in the next couple of days or so we will find a way to extend tax cuts for middle-class families and do some other things that the president thinks are important in helping to grow the economy and create jobs," Burton said.
Those other things include unemployment insurance, as well as middle-class tax breaks such as school tuition tax credits.
"As the president has been saying, he's not going to just sign an extension of these tax cuts," Burton said. "He thinks that what we need to do is extend unemployment insurance and do some of these other tax credits that are so stimulative and help to both grow the economy and also help middle-class families who are really struggling right now."
(Posted by David Jackson) ||||| President Obama conceded Monday that he’ll probably have to let the Bush tax cuts for the rich be extended as part of a deal with Republicans, arguing that such an agreement was necessary to ensure that taxes for the middle class don’t increase on Jan. 1.
“We've got to make sure that we’re coming up with a solution, even if it’s not 100 percent of what I want or what the Republicans want,” he said during a speech about the economy at a North Carolina technical community college.
Obama insisted that the U.S. “can’t afford” to extend the tax cuts for the rich permanently, as Republicans desire — a move analysts say will add billions to the already massive federal deficit, but that the GOP argues will stimulate investment and small-business hiring. But the president signaled that he may have no choice other than to extend the breaks at least temporarily, as members of both parties in Congress wrangle over how to retain the expiring cuts as the year’s end deadline looms.
“We've got to find consensus here,” Obama said. Allowing the tax cuts to expire across the board, he said, “would be very tough not only on working families. It would also be a drag on the economy at this moment. So I believe we should keep in place tax cuts for workers and small businesses that are set to expire.”
He added, “There’s no reason that ordinary Americans should see their taxes go up next year.”
Before he touched down in Winston-Salem, N.C., the president spent part of the morning calling lawmakers to shore up support for a deal on tax cuts to be reached by the end of the week, according to the White House. White House deputy press secretary Bill Burton told reporters aboard Air Force One that negotiators have made progress on a deal and that Obama was confident that one would be reached “in the next couple of days or so.”
White House officials hinted on Monday that the president would not sign a compromise extension unless it included the continuation of long-term unemployment insurance and a slew of “stimulative” tax programs that were part of the 2009 Recovery Act. The president alluded to those proposals during his speech at Forsyth Technical Community College. | – Faced with the prospect of no deal on the Bush tax cuts resulting in a middle-class tax hike on Jan. 1, President Obama is signaling a willingness to extend them across the board, reports USA Today. "We've got to find consensus here even if it's not 100% of what I want or what the Republicans want," he said in a speech today. "There’s no reason that ordinary Americans should see their taxes go up next year.” The burgeoning federal deficit "can't afford" to permanently extend the tax cuts to the rich, the president said, but he looks ready to do so temporarily—though Politico notes that the White House has hinted that such a compromise must be tied to the extension of unemployment insurance and other stimuli in the 2009 Recovery Act. Congressional Democrats are meeting today with Obama and Joe Biden to discuss the issue. |
the dpp was a 27-center randomized clinical trial in the u.s . that assessed whether metformin or lifestyle interventions prevent or delay development of diabetes in high - risk individuals .
the dpp enrolled 3,234 overweight or obese people without diabetes but with impaired glucose tolerance and elevated fasting glucose and randomized them to placebo , metformin ( 850 mg twice daily ) , or a lifestyle intervention program consisting of individual and group counseling sessions conducted by dietary and exercise professionals aimed at 7% weight loss and 150 min of physical activity per week .
a fourth arm of 585 subjects assigned to troglitazone ( 400 mg daily ) was stopped because of hepatotoxicity ( 7 ) .
the primary end point was development of diabetes , ascertained by semi - annual measurement of fasting glucose or an annual 75-g oral glucose tolerance test , either of which was confirmed on a second occasion .
the metformin and lifestyle interventions reduced the incidence of diabetes by 31% ( 95% ci 17–43 ) and 58% ( 95% ci 48–66 ) , respectively , versus placebo ( 6 ) .
the 2,994 participants in the placebo , metformin , and lifestyle arms who gave informed consent for genetic investigation are the subjects of this study , which was approved by institutional review boards at each of the 27 participating sites .
baseline characteristics of the dpp participants enrolled in the genetic study ; data are n ( % ) or means ± sd .
we selected snps in two ways : 1 ) snps in high - likelihood candidate genes and 2 ) snps identified by ongoing gwass for type 2 diabetes or related metabolic traits .
the 40 candidate genes were tentatively associated with type 2 diabetes , implicated in monogenic forms of diabetes , known to encode type 2 diabetes drug targets or drug - metabolizing / transporting enzymes , or involved in cellular metabolism , hormonal regulation , or response to exercise ( table 2 ) .
we used tagger ( 8 ) to capture ( at r² ≥ 0.8 ) all common ( minor allele frequency > 5% ) variations in european ( ceu ) and african ( yri ) hapmap populations in these candidate genes . for seven additional genes ( ace , casq1 , gckr , irs1 , kcnq1 , lipc , and nos3 ) , rather than attempting full coverage of genetic variation , we selected a limited number of snps previously associated with the phenotypes of interest .
as the study evolved , it became obvious that previous reports of genetic association provided an equally compelling or perhaps even higher prior probability of true association with type 2 diabetes traits than biological function alone ; thus , we also focused on gwass whose results were available at the time this custom - made genotyping array was designed : snps associated with type 2 diabetes in the diabetes genetics initiative ( 9 ) , diagram ( 10 ) , or three smaller 100k snp gwass in which we participated ( 1113 ) ; snps tentatively associated with quantitative glycemic traits ( fasting glucose , the insulinogenic index , and insulin resistance by homeostasis model assessment ) in the diabetes genetics initiative ; or snps associated with obesity ( 14,15 ) or lipid traits ( 1618 ) . for quality control and analytical reasons , we also included some snps previously genotyped in these samples , as well as ancestry - informative markers to derive a global proportion of geographic ancestry in african american ( 19 ) or hispanic ( 20 ) participants .
finally , we included a small number of snps provided by investigators leading ancillary studies approved by the dpp ancillary studies and genetics subcommittees .
number of snps analyzed per selection category . we initially designed a 1,536-snp oligonucleotide pool array for the illumina beadarray platform ( illumina , san diego , ca ) . for the 1,445 snps that passed quality control metrics ,
the sample pass rate was 99.8% and the average genotyping call rate per snp was 98.5% . because 91 snps failed genotyping on the oligonucleotide pool array , we assessed the adequacy of the coverage afforded by the successfully genotyped snps in each region .
to rescue relevant snps , we used linkage disequilibrium ( ld ) to select proxy snps highly correlated to those that had failed and genotyped them on a sequenom iplex platform .
we tested the effect of each snp on diabetes incidence under an additive genetic model by cox proportional hazards models , using age , sex , ethnicity , and treatment arm as covariates and including treatment ( metformin or lifestyle ) genotype interaction terms .
in secondary analyses , we stratified participants by treatment arm ; if the interaction p value was nominally significant , only stratified analyses were considered .
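A minimal sketch of this model using the Python lifelines package; the data file, column names, and additive 0/1/2 genotype coding are assumptions for illustration, not the study's actual analysis code:

```python
# Cox proportional hazards fit with treatment-by-genotype interaction
# terms, in the spirit of the analysis described above. File and column
# names are hypothetical; genotype is the minor-allele count (0/1/2).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("dpp_genotypes.csv")  # hypothetical analysis file

df["metformin_x_geno"] = df["metformin"] * df["genotype"]
df["lifestyle_x_geno"] = df["lifestyle"] * df["genotype"]

covariates = ["age", "sex", "ethnicity_code", "metformin", "lifestyle",
              "genotype", "metformin_x_geno", "lifestyle_x_geno"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_years", "diabetes"]],
        duration_col="followup_years", event_col="diabetes")
cph.print_summary()  # hazard ratios and interaction p values
```

Under this setup, a nominally significant interaction coefficient is what would trigger the arm-stratified secondary analysis described above.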
we used the mach software ( 21 ) and the hapmap ceu population to impute allelic calls at snps not directly genotyped in the dpp . because of concerns regarding the accuracy of imputation methods in admixed populations , we restricted this procedure to individuals of self - described non - hispanic white ethnicity .
genotype - phenotype correlations on imputed data were considered confirmatory of prior associations , as well as an initial fine - mapping exploration . using the program structure ( 22 ) , we applied these markers trained on the hapmap populations to assign a proportion of global european ancestry to each dpp participant .
we considered two sequential approaches to correct for multiple hypothesis testing based on the number of snps examined ( 23 ) .
we first ran 1,000 permutations in which diabetes outcome was randomly assigned to an individual 's genotype within each ethnicity and treatment group ( keeping sex and age together with genotype , and bmi with diabetes outcome ) .
the p value for the overall null hypothesis is the fraction of permutations ( n/1,000 ) for which the scalar statistic is at least as extreme as that observed for the data ( 24 ) . to estimate the expected proportion of type i errors among the rejected hypotheses , we also computed false discovery rates ( fdrs ) as in benjamini and hochberg ( 25 ) .
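A compact sketch of these two corrections, assuming a placeholder test statistic: the within-stratum permutation keeps each participant's covariates with their genotype while reshuffling outcomes, and the Benjamini-Hochberg step converts the resulting p values into FDR q values:

```python
# Sketch of the stratified permutation test and Benjamini-Hochberg FDR
# described above. stat_fn is a placeholder for the scalar test statistic.
import numpy as np

def permutation_p(stat_fn, outcome, strata, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = stat_fn(outcome)
    hits = 0
    for _ in range(n_perm):
        shuffled = outcome.copy()
        for s in np.unique(strata):       # permute within ethnicity/arm strata
            idx = np.where(strata == s)[0]
            shuffled[idx] = rng.permutation(shuffled[idx])
        hits += stat_fn(shuffled) >= observed
    return hits / n_perm                  # fraction at least as extreme

def bh_fdr(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    q = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):   # walk from the largest p down
        running_min = min(running_min, p[idx] * m / (m - rank))
        q[idx] = running_min
    return q
```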
supplementary table 1 ( available in an online appendix at http://diabetes.diabetesjournals.org/cgi/content/full/db10-0543/dc1 ) shows that we achieved adequate coverage of all 40 genes in the two targeted populations , with 37 genes reaching at least 80% of common variants captured at r² ≥ 0.8 in europeans and all 40 reaching at least 70% of common variants captured at that level ( comparable numbers were obtained in africans ) .
the average proportion of european ancestry among the dpp self - described white participants , as determined by ancestry - informative markers , was 98.9% , and the average proportion of west - african ancestry among dpp self - described african american participants was 89.3% . given these results , we used self - described ethnicity as a covariate for these analyses .
table 3 shows the candidate gene regions harboring variants nominally associated with diabetes incidence in the treatment - adjusted models for the full study ( i.e. , there was no evidence for interaction with either intervention ) ; only the top snp within each gene region ( out of 85 nominal associations ) is given .
the most significant associations occurred at snps in the amp kinase ( ampk ) subunit gene prkag2 ( hazard ratio [ hr ] 1.24 , 95% ci 1.09–1.40 , p = 7.0 × 10^-4 for the top snp rs5017427 , which is consistent with an experiment - wide 34% fdr ) .
twelve other prkag2 snps were nominally associated with diabetes ( five in the top ten ) .
although most of them are in moderate to high ld with the index snp ( r² ranging from 0.49 to 1.0 in hapmap ceu ) , at least two of them ( rs954482 and rs2727537 ) are only weakly correlated with rs5017427 ( r² = 0.07 and 0.05 , respectively ) .
nevertheless , the consistency of the association signal in this region provides reassurance with regard to the absence of genotyping artifacts in our dataset . of snps previously associated with type 2 diabetes in the 100k amish , framingham , or pima gwass , three ( rs1422930 in odz2 , rs1859441 near col2a1 and senp1 , and rs385909 near sh3yl1 ) had consistent nominal associations with diabetes incidence in the dpp , and two had nominally significant associations ( rs10520926 and rs3136279 ) in the opposite direction . on the other hand , none of the six snps selected from the diagram meta - analysis ( original odds ratio [ or ] ranging from 1.05 to 1.15 ) were nominally significant in the dpp .
fifteen snps in genes that cause either maturity - onset diabetes of the young or neonatal diabetes were nominally associated with diabetes incidence ; one of them , rs11868513 in hnf1b ( not in ld with the previously type 2 diabetes - associated snp rs757210 ) , was strongly associated with diabetes incidence in the placebo arm ( hr 1.69 , 95% ci 1.36–2.10 , p = 2 × 10^-6 ) .
finally , 14 snps in genes that encode metformin transporters ( slc22a1 , slc22a2 , and slc47a1 ) were nominally associated with diabetes incidence .
of the 85 nominal associations with diabetes incidence in dpp , only two snps ( rs651164 in slc22a1 and rs3736265 in ppargc1a ) were nominally associated with type 2 diabetes in diagram in a consistent direction ( or 1.08 , 95% ci 1.02–1.16 , p = 0.01 , and or 1.15 , 95% ci 1.01–1.31 , p = 0.04 , respectively ) , with 60 other snps not being nominally significantly associated in diagram and 23 snps not captured in that dataset .
candidate gene variants nominally associated with diabetes incidence in the dpp . snps in or near biological candidate genes showing nominal association with diabetes incidence in the dpp are shown .
hrs are estimated for the minor allele ( m ) vs. the major allele ( M ) under an additive genetic model .
only the top snp within each gene region is shown ; the full set of results ( including allele frequencies ) is available in supplementary table 2 .
table 4 shows the candidate gene regions harboring variants that have a nominally significant genotype metformin interaction ; only the top snp within each gene region is given ( out of 91 nominal associations ) .
at rs8065082 in slc47a1 , there was a nominal interaction with metformin ( p = 0.006 ) , with the minor allele associated with lower diabetes incidence in the metformin arm ( hr 0.78 , 95% ci 0.64–0.96 , p = 0.02 ) but not in the placebo arm ( 1.15 , 0.97–1.37 , p = 0.11 ) .
at this locus , major allele homozygotes did not benefit from metformin with regard to diabetes prevention ( hr 1.07 , 95% ci 0.77–1.50 , vs. placebo , p = 0.68 ) , whereas minor allele carriers did ( 0.58 , 0.46–0.73 , vs. placebo , p < 0.001 ; fig . 1 ) . we also noted a nominally significant interaction of a missense snp in slc22a1 ( rs683369 , encoding l160f ) with metformin , with the major allele protecting from diabetes in the metformin arm ( hr 0.69 , 95% ci 0.53–0.89 , p = 0.004 ) but not the placebo arm ( 1.01 , 0.79–1.30 , p = 0.91 ) ; the major allele is therefore associated with 31% risk reduction in diabetes incidence but only under the action of metformin . in this arm , the likelihood of developing diabetes depended on the number of phenylalanine alleles ( hr 0.72 , 95% ci 0.59–0.88 , vs. placebo for ll homozygotes ; 0.92 , 0.66–1.28 , for heterozygotes ; and 1.44 , 0.56–3.67 , for ff homozygotes ) .
there were five nominally significant interactions at snps encoding putative drug targets for metformin , in the gene encoding the ampk kinase stk11 and the ampk subunit genes prkaa1 , prkaa2 , and prkab2 , respectively .
a total of 22 snps in the abcc8-kcnj11 region also had nominally significant interactions with metformin , including rs5215 , which is tightly linked to the widely replicated type 2 diabetes associated missense snp rs5219 ( e23k ) in kcnj11 .
candidate gene variants showing a nominally significant interaction with the metformin intervention in the dpp . snps in or near biological candidate genes showing a nominally significant interaction with the metformin intervention in the dpp are shown .
hrs are estimated for the minor allele ( m ) vs. the major allele ( M ) under an additive genetic model .
only the top snp within each gene region is shown ; the full set of results ( including allele frequencies ) is available in supplementary table 2 .
this snp is in tight ld with rs2289669 ( r² ≥ 0.8 ) , whose major allele predicts a poorer response to metformin ( 5 ) . in the dpp , major allele homozygotes at rs8065082 did not benefit from metformin with regard to diabetes prevention , whereas minor allele carriers did ( p < 0.001 ) .
table 5 shows the candidate gene regions harboring variants that have a nominally significant interaction with the lifestyle intervention ; only the top snp within each gene region is given ( out of 69 nominal associations ) .
twelve of the top findings were in four ampk subunit genes ( prkaa2 , prkab2 , prkag1 , and prkag2 ) , and 11 snps clustered around the peroxisome proliferator - activated receptor γ coactivators 1α and 1β ( ppargc1a and ppargc1b , respectively ) .
candidate gene variants showing a nominally significant interaction with the lifestyle intervention in the dpp . snps in or near biological candidate genes showing a nominally significant interaction with the lifestyle intervention in the dpp are shown .
hrs are estimated for the minor allele ( m ) vs. the major allele ( M ) under an additive genetic model .
only the top snp within each gene region is shown ; the full set of results ( including allele frequencies ) is available in supplementary table 2 .
review of 1,609 snps imputed in non - hispanic white dpp participants ( supplementary table 3 ) revealed the nominal association of other prkag2 snps with diabetes incidence ( best p = 5 × 10^-4 ) .
imputed snps in the prkaa1 , prkaa2 , and abcc8-kcnj11 regions also had nominally significant interactions with metformin .
we conducted a large - scale genotyping study in the dpp , with the aim to test whether common variants in candidate genes involved in major spheres of human physiology predict diabetes incidence or response to preventive interventions in a multiethnic at - risk population .
we provide evidence supporting a previously reported association of variants in the metformin transporter gene slc47a1 with weaker metformin response , here defined as the reduced ability of metformin to lower diabetes incidence ( 5 ) .
we identified a number of nominal associations with diabetes incidence or metformin response in several compelling candidate genes ; however , none withstood strict statistical correction for multiple hypothesis testing by fdr .
correction for multiple tests requires careful consideration in genetic association studies ( 26 ) . when large numbers of snps are tested , methods that are valid in the presence of correlations due to ld , such as permutation methods or evaluation of fdr , are preferred over those that assume independence of snps .
the scope of the present analysis is guided by technological convenience , and it might be argued that the number of distinct scientific hypotheses formulated , rather than the physical size of the genotyping array , is most relevant to the interpretation of results .
however , what constitutes a single hypothesis ( e.g. , an snp , a gene , an entire pathway , or a constellation of phenotypes ) is subjective . on the other hand , correcting for the equivalent of the universe of independent common variants in the human genome ( empirically estimated at 1 million ) is gaining increasing favor among genetic statisticians . in this context
we previously quantified the power of the dpp to detect modest genetic effects on diabetes incidence ( 28 ) .
assuming there are no gene - treatment interactions , these calculations show that the overall dpp cohort has 83% power to detect a previously reported effect size of 20% for an snp of 10% frequency at an α level of 0.05 , while the placebo , lifestyle modification , and metformin arms have 53 , 34 , and 44% power , respectively .
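As a rough illustration of where such power figures come from, a standard approximation treats the information for an additive per-allele effect in a survival model as proportional to the number of events times the genotype variance, 2·MAF·(1−MAF) under Hardy-Weinberg equilibrium. The event count below is an assumed figure, not the DPP's actual case count:

```python
# Back-of-the-envelope power for an additive per-allele effect in a
# survival analysis. n_events is an assumed, illustrative figure.
import math
from statistics import NormalDist

def genetic_power(hazard_ratio, maf, n_events, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    var_g = 2 * maf * (1 - maf)            # additive genotype variance
    z_beta = abs(math.log(hazard_ratio)) * math.sqrt(n_events * var_g) - z_alpha
    return NormalDist().cdf(z_beta)

# A 20% effect (HR 1.2) at 10% minor allele frequency:
print(round(genetic_power(1.20, 0.10, n_events=655), 2))  # ~0.51 here
```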
thus , it is not surprising that the dpp does not replicate all gwas - derived findings in that range or that it fails to reach genome - wide significance in discovery efforts . our null results on diabetes incidence for truly associated variants may be due to the high - risk population at baseline , the short time of follow - up ( 3.2 years on average ) , and/or the use of interventions effective in reducing diabetes incidence .
on the other hand , considering the number of variants likely to influence the phenotypes under study , even submaximal power is likely to provide a number of true positive associations . in this context ,
genotyped and imputed snps in the gene encoding the ampk γ2 subunit ( prkag2 ) merit further consideration . while the association of snps in genes that encode metformin transporters with type 2 diabetes in the entire dpp cohort ( if real ) requires explanation , this could be due to a sufficiently strong effect in the metformin arm alone .
alternatively , snps in this region could be capturing variants in other nearby genes : for instance , immediately upstream of slc22a1 and slc22a2 in chromosome 6 lies the gene encoding the insulin - like growth factor 2 receptor ( igf2r ) , an excellent biological candidate .
this study constitutes the first large - scale prospective pharmacogenetic evaluation of metformin action in a controlled clinical trial .
the uk prospective diabetes study ( 29 ) and a diabetes outcome progression trial ( adopt ) ( 30 ) investigators independently showed that a substantial proportion of patients with type 2 diabetes eventually fail metformin therapy , defined by a need for additional pharmacotherapy to control hyperglycemia . given the higher prior probability afforded by the known biological role of slc47a1 in disposing of metformin and the previously reported genetic association of the major allele at snp rs2289669 with poorer metformin response ( 5 ) , validation in the dpp can be convincing without achieving the levels of statistical significance required for novel findings .
our index snp ( rs8065082 ) is in tight ld with rs2289669 ( r 0.8 in hapmap ceu ) and the direction of effect is consistent in dpp , a cohort nearly 10-fold larger than the one documented in the original report from rotterdam ( 5 ) .
these results corroborate the rotterdam report ( 5 ) and suggest that major allele homozygotes at this locus ( ~30% of the european population ) may experience suboptimal responses to metformin treatment .
while our noted association with a missense snp appears compelling , it is not among the most functional human variants described by shu et al . ( 3 ) , and it is in weak ld with rs622342 ( r² = 0.14 in hapmap ceu ) , a slc22a1 snp associated with metformin response in another report from rotterdam ( 31 ) .
snp rs622342 was included among our tag snps but showed no evidence of an interaction with metformin ( nominal p = 0.69 ) or an effect on diabetes incidence in any arm , raising the possibility that the original finding may have been spurious .
similarly , the slc22a2 missense snp rs316019 ( a270s ) , reported to influence metformin renal excretion and affect its plasma concentrations ( 32 ) , did not significantly interact with metformin in the dpp ( nominal p = 0.35 ) .
our novel findings in the putative metformin drug targets stk11 and ampk require confirmation , as do those in mef2a and mef2d , themselves regulated by ampk ( 33 ) .
one of the most significant interactions with metformin occurred at an snp in hnf4a ; given its role in hepatic gluconeogenesis ( 34 ) , this intriguing result deserves further exploration .
in contrast , the multiple interactions noted in the abcc8-kcnj11 locus reported previously ( 35 ) do not offer a clear mechanism of action . finally , nominal associations with response to lifestyle modification should be replicated in cohorts that underwent a similar intervention . in summary , we have conducted a large - scale genetic association study in the dpp and replicated the association of a polymorphism in a metformin transporter with metformin response .
other hypothesis - generating results require more detailed characterization in the dpp and follow - up in independent samples .
| objective : genome - wide association studies have begun to elucidate the genetic architecture of type 2 diabetes .
we examined whether single nucleotide polymorphisms ( snps ) identified through targeted complementary approaches affect diabetes incidence in the at - risk population of the diabetes prevention program ( dpp ) and whether they influence a response to preventive interventions .
research design and methods : we selected snps identified by prior genome - wide association studies for type 2 diabetes and related traits , or capturing common variation in 40 candidate genes previously associated with type 2 diabetes , implicated in monogenic diabetes , encoding type 2 diabetes drug targets or drug - metabolizing / transporting enzymes , or involved in relevant physiological processes .
we analyzed 1,590 snps for association with incident diabetes and their interaction with response to metformin or lifestyle interventions in 2,994 dpp participants .
we controlled for multiple hypothesis testing by assessing false discovery rates .
results : we replicated the association of variants in the metformin transporter gene slc47a1 with metformin response and detected nominal interactions in the amp kinase ( ampk ) gene stk11 , the ampk subunit genes prkaa1 and prkaa2 , and a missense snp in slc22a1 , which encodes another metformin transporter .
the most significant association with diabetes incidence occurred in the ampk subunit gene prkag2 ( hazard ratio 1.24 , 95% ci 1.09–1.40 , p = 7 × 10^-4 ) .
overall , there were nominal associations with diabetes incidence at 85 snps and nominal interactions with the metformin and lifestyle interventions at 91 and 69 mostly nonoverlapping snps , respectively .
the lowest p values were consistent with experiment - wide 33% false discovery rates .
conclusions : we have identified potential genetic determinants of metformin response . these results merit confirmation in independent samples . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Price Gouging Act of 2007''.
SEC. 2. FUEL PRICE GOUGING PROHIBITION FOLLOWING MAJOR DISASTERS.
(a) In General.--The Federal Trade Commission Act (15 U.S.C. 41 et
seq.) is amended by inserting after section 24 (15 U.S.C. 57b-5) the
following:
``SEC. 24A. PROTECTION FROM FUEL PRICE GOUGING FOLLOWING MAJOR
DISASTERS.
``(a) Definitions.--In this section:
``(1) Affected area.--The term `affected area' means an
area affected by a major disaster declared by the President
under Federal law in effect on the date of the enactment of
this section.
``(2) Price gouging.--The term `price gouging' means the
charging of an unconscionably excessive price by a supplier in
an affected area.
``(3) Supplier.--The term `supplier' means any person that
sells gasoline or diesel fuel for resale or ultimate
consumption.
``(4) Unconscionably excessive price.--The term
`unconscionably excessive price' means a price charged in an
affected area for gasoline or diesel fuel that--
``(A) represents a gross disparity, as determined
by the Commission in accordance with subsection (e),
between the price charged for gasoline or diesel fuel
and the average price of gasoline or diesel fuel
charged by suppliers in the affected area during the
30-day period ending on the date the President declares
the existence of a major disaster; and
``(B) is not attributable to increased wholesale or
operational costs incurred by the supplier in
connection with the sale of gasoline or diesel fuel.
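As a purely illustrative reading of this definition, a screening heuristic might compare a post-declaration price, net of documented cost increases, against the 30-day pre-declaration average. The 25 percent threshold below is a placeholder; the Act leaves the meaning of ``gross disparity'' to Commission regulations under subsection (e):

```python
# Hypothetical screen for the definition above. The disparity threshold
# is a placeholder pending the Commission's "gross disparity" rules.
from statistics import mean

def flags_gross_disparity(post_price, prices_30_days_prior,
                          documented_cost_increase=0.0, threshold=1.25):
    baseline = mean(prices_30_days_prior)
    adjusted = post_price - documented_cost_increase   # subsection (a)(4)(B)
    return adjusted > baseline * threshold

prior = [3.09, 3.11, 3.15, 3.12]   # per-gallon prices before the declaration
print(flags_gross_disparity(4.50, prior))                                 # True
print(flags_gross_disparity(4.50, prior, documented_cost_increase=0.80))  # False
```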
``(b) Determination of the Commission.--As soon as practicable
after the President declares a major disaster, the Commission shall--
``(1) consult with the Attorney General, the United States
Attorney for the district in which the disaster occurred, and
State and local law enforcement officials to determine whether
any supplier in the affected area is charging or has charged an
unconscionably excessive price for gasoline or diesel fuel
provided in the affected area; and
``(2) establish within the Commission--
``(A) a toll-free hotline that a consumer may call
to report an incidence of price gouging in the affected
area; and
``(B) a program to develop and distribute to the
public informational materials in English and Spanish
to consumers in the affected area on detecting and
avoiding price gouging.
``(c) Price Gouging Involving Disaster Victims.--
``(1) Offense.--During the 180-day period beginning on the
date on which a major disaster is declared by the President, it
shall be unlawful for a supplier to sell, or offer to sell,
gasoline or diesel fuel in an affected area at an
unconscionably excessive price.
``(2) Action by commission.--
``(A) In general.--During the period described in
paragraph (1), the Commission shall conduct
investigations of complaints by consumers of price
gouging by suppliers in an affected area.
``(B) Positive determination.--If the Commission
determines under subparagraph (A) that a supplier is in
violation of paragraph (1), the Commission shall take
any action the Commission determines to be appropriate
to remedy the violation.
``(3) Civil penalties.--A supplier who commits a violation
described in paragraph (1) may, in a civil action brought in a
court of competent jurisdiction, be subject to--
``(A) a civil penalty of not more than $500,000;
``(B) an order to pay special and punitive damages;
``(C) an order to pay reasonable attorney's fees;
``(D) an order to pay costs of litigation relating
to the offense;
``(E) an order for disgorgement of profits earned
as a result of a violation of paragraph (1); and
``(F) any other relief determined by the court to
be appropriate.
``(4) Criminal penalty.--A supplier that knowingly commits
a violation described in paragraph (1) shall be imprisoned not
more than 1 year.
``(5) Action by victims.--A person, Federal agency, State,
or local government that suffers loss or damage as a result of
a violation of paragraph (1) may bring a civil action against a
supplier in any court of competent jurisdiction for
disgorgement, special or punitive damages, injunctive relief,
reasonable attorney's fees, costs of the litigation, and any
other appropriate legal or equitable relief.
``(6) Action by state attorneys general.--An attorney
general of a State, or other authorized State official, may
bring a civil action in the name of the State, on behalf of
persons residing in the State, in any court of competent
jurisdiction for disgorgement, special or punitive damages,
reasonable attorney's fees, costs of litigation, and any other
appropriate legal or equitable relief.
``(7) No preemption.--Nothing in this section preempts any
State law.
``(d) Report.--Not later than 1 year after the date of the
enactment of this section, and annually thereafter, the Commission
shall submit to the Committee on Commerce, Science, and Transportation
of the Senate and the Committee on Energy and Commerce of the House of
Representatives a report describing the following:
``(1) The number of price gouging complaints received by
the Commission for each major disaster declared by the
President during the preceding year.
``(2) The number of price gouging investigations of the
Commission initiated, in progress, and completed as of the date
on which the report is prepared.
``(3) The number of enforcement actions of the Commission
initiated, in progress, and completed as of the date on which
the report is prepared.
``(4) An evaluation of the effectiveness of the toll-free
hotline and program established under subsection (b)(2).
``(5) Recommendations for any additional action with
respect to the implementation or effectiveness of this section.
``(e) Definition of Gross Disparity.--Not later than 180 days after
the date of the enactment of this subsection, the Commission shall
promulgate regulations to define the term `gross disparity' for
purposes of this section.''.
(b) Effect of Section.--Nothing in this section, or the amendment
made by this section, affects the authority of the Federal Trade
Commission in effect on the date of the enactment of this Act with
respect to price gouging. | Price Gouging Act of 2007 - Amends the Federal Trade Commission Act to direct the Federal Trade Commission (FTC), after the President declares a major disaster, to: (1) consult with the Attorney General, the U.S. Attorney for that area, and state and local law enforcement officials to determine whether any supplier is charging unconscionably excessive prices for gasoline or diesel fuel; (2) establish a toll-free hotline for a consumer to report price gouging; and (3) establish a program to develop and distribute public informational materials in English and Spanish to assist consumers in detecting and avoiding price gouging.
Makes it unlawful to charge unconscionably excessive prices for any gasoline or diesel fuel during the 180-day period after the President declares a major disaster.
Requires the FTC, if it determines a supplier is in violation, to take any action it determines appropriate to remedy the violation. Authorizes civil penalties. Requires imprisonment for knowing violations.
Authorizes victims and any state attorney general to bring a civil action against violators of this Act. |
chorea may be the manifestation of a wide variety of degenerative , vascular , metabolic , or toxic disorders involving the central nervous system , in which dysfunction of the basal ganglia , particularly of the striatum , is generally assumed to be responsible .
we report an index case of generalized chorea secondary to the ingestion of propiconazole toxin ( fungicide ) in a young female . in this case ,
the patient was a 35-year - old lady who was a known case of primary juvenile myoclonic epilepsy , on treatment with valproate ( 800 mg / day ) for 7 years .
she was found in an unconscious state at home by her family , who took her to the emergency department of another facility . in view of her prior history of epilepsy , her unresponsiveness was initially attributed to seizures .
three days later , due to her persistently unresponsive neurological condition , she was referred to our tertiary care hospital for further management . at admission ,
her glasgow coma scale was e2m5v1 , with bilateral small - sized pupils ( 2 mm , reacting ) , heart rate of 64/min , blood pressure 90/60 mmhg on inotropic support with norepinephrine 2 g / kg / min and dopamine 8 g / kg / min .
blood biochemistry and metabolic profile were normal , with a total leukocyte count of 12,000/mm³ and a random blood sugar ( rbs ) of 85 mg / dl ; blood and urine cultures were sterile .
electroencephalogram was done to rule out status epilepticus while the patient was on valproate ( blood level-80 ) and no sedatives .
magnetic resonance imaging ( mri ) of the brain with contrast , and cytological and biochemical analysis of cerebrospinal fluid , were normal . on day 3 of her illness , when inotropes had been tapered , her heart rate was in the range of 50–58/min .
bradycardia ( even with inotropic support ) along with small - sized pupils raised the suspicion of intoxication .
furthermore , relatives were redirected to look for any evidence of suspicious compound at home .
cholinesterase levels were very low ( 134 ) along with an empty bottle of propiconazole toxin ( fungicide ) found at home , confirming the diagnosis of intoxication .
however , on the 9th day of her illness , she developed irregular , random , flowing movements migrating from one part of the body to another , suggesting generalized chorea [ video 1 ] .
a detailed family history was not positive for any chorea / choreiform movements suggestive of inherited degenerative disorders .
the patient was extensively evaluated for other acquired causes of chorea , but her hematological investigations including peripheral blood film , biochemical and thyroid function tests , serum valproate levels , and vasculitic workup were negative , and repeat mri of the brain and computed tomography ( ct ) of the abdomen were also normal . she was treated with clonazepam ( 2 mg ) , tetrabenazine ( 75 mg ) , and risperidone ( 2 mg ) , with partial improvement in chorea .
our case depicts an initial diagnostic dilemma , a frequent problem in intoxication , as the history is often concealed .
insecticide poisonings are a relatively common occurrence with a wide spectrum of neurological presentations , which may affect the function of the central and peripheral nervous system .
clinical manifestations vary greatly , and include movement disorders such as secondary parkinsonism and a wide range of hyperkinetic disorders and even delayed neuropathy .
propiconazole is a triazole - class fungicide and is available as an emulsifiable concentrate , ready - to - use liquid [ figure 1 ] .
it is also known as a dmi or demethylation - inhibiting fungicide , due to its binding with and inhibition of the 14-alpha - demethylase enzyme . to the best of our knowledge ,
this is the index case of generalized chorea following intoxication with propiconazole ( fungicide ) .
the probable mechanism may be excessive acetylcholine activity in the nigrostriatal system due to inactivation of acetylcholinesterase by propiconazole . within the nigrostriatal network , the caudate nucleus and globus pallidus are particularly rich in cholinergic neurons , causing less inhibition of pallidothalamic fibers and generalized choreiform movements .
chemical structure of propiconazole ( C15H17Cl2N3O2 ) [ figure 1 ] . the neurological manifestations of propiconazole intoxication have not been reported in the literature . only a few cases of extrapyramidal manifestations following cholinergic intoxication secondary to organophosphate compound ( opc ) poisoning have been described . in the three cases described by joubert et al . , the extrapyramidal manifestations were limited to choreiform movements .
although the exact mechanism of action of propiconazole is not clear , the circumstantial evidence , low cholinesterase levels , and response to anticholinergic treatment sufficiently support our diagnosis . with the delayed development of generalized chorea as a neurological sequela , propiconazole may be added as another compound to the causal list of toxic chorea , widening the spectrum of acquired toxic causes still further . in a country like ours , where poisonings are frequent , exposure to toxic agents
| chorea is a rare manifestation of poisoning .
we report an index case of a young woman who developed generalized chorea following propiconazole toxin ingestion . as large series on neurological complications of toxic compounds are difficult to be compiled , it is of interest to report our experience .
this report adds one more compound to the increasing list of toxic chorea . |
SECTION 1. PHASE-OUT OF TAX SUBSIDIES FOR ALCOHOL FUELS PRODUCED FROM
FEEDSTOCKS ELIGIBLE TO RECEIVE FEDERAL AGRICULTURAL
SUBSIDIES.
(a) Alcohol Fuels Credit.--Section 40 of the Internal Revenue Code
of 1986 (relating to credit for alcohol used as a fuel) is amended by
adding at the end the following new subsection:
``(g) Phase-Out of Credit for Alcohol Produced From Feedstocks
Eligible To Receive Federal Agricultural Subsidies.--
``(1) In general.--No credit shall be allowed under this
section with respect to any alcohol, or fuel containing
alcohol, which is produced from any feedstock which is a
subsidized agricultural commodity.
``(2) Phase-in of disallowance.--In the case of taxable
years beginning in 1995 and 1996, paragraph (1) shall not apply
and the credit determined under this section with respect to
alcohol or fuels described in paragraph (1) shall be equal to
67 percent (33 percent in the case of taxable years beginning
in 1996) of the credit determined without regard to this
subsection.
``(3) Subsidized agricultural commodity.--For purposes of
this subsection, the term `subsidized agricultural commodity'
means any agricultural commodity which is supported, or is
eligible to be supported, by a price support or production
adjustment program carried out by the Secretary of
Agriculture.''.
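Numerically, subsection (g) leaves the credit untouched for taxable years before 1995, allows 67 percent of it in 1995 and 33 percent in 1996, and disallows it thereafter for subsidized-feedstock alcohol. A sketch of that schedule with an illustrative base credit amount:

```python
# Phase-out schedule of new section 40(g); the $1,000 base credit is
# illustrative only.
def phased_credit(base_credit, tax_year, subsidized_feedstock):
    if not subsidized_feedstock or tax_year < 1995:
        return base_credit
    multiplier = {1995: 0.67, 1996: 0.33}.get(tax_year, 0.0)
    return base_credit * multiplier

for year in (1994, 1995, 1996, 1997):
    print(year, phased_credit(1000.0, year, subsidized_feedstock=True))
# 1994 -> 1000.0, 1995 -> 670.0, 1996 -> 330.0, 1997 -> 0.0
```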
(b) Excise Tax Reduction.--
(1) Petroleum products.--Section 4081(c) of the Internal
Revenue Code of 1986 (relating to taxable fuels mixed with
alcohol) is amended by redesignating paragraph (8) as paragraph
(9) and by adding after paragraph (7) the following new
paragraph:
``(8) Phase-out of subsidy for alcohol produced from
feedstocks eligible to receive federal agricultural
subsidies.--
``(A) In general.--This subsection shall not apply
to any qualified alcohol mixture containing alcohol
which is produced from any feedstock which is a
subsidized agricultural commodity.
``(B) Phase-in of disallowance.--In the case of
calendar years 1995 and 1996, the rate of tax under
subsection (a) with respect to any qualified alcohol
mixture described in subparagraph (A) shall be equal to
the sum of--
``(i) the rate of tax determined under this
subsection (without regard to this paragraph),
plus
``(ii) 33 percent (67 percent in the case
of 1996) of the difference between the rate of
tax under subsection (a) determined with and
without regard to this subsection.
``(C) Subsidized agricultural commodity.--For
purposes of this paragraph, the term `subsidized
agricultural commodity' means any agricultural
commodity which is supported, or is eligible to be
supported, by a price support or production adjustment
program carried out by the Secretary of Agriculture.''.
(2) Special fuels.--Section 4041 (relating to tax on
special fuels) is amended by adding at the end the following
new subsection:
``(n) Phase-Out of Subsidy for Alcohol Produced From Feedstocks
Eligible To Receive Federal Agricultural Subsidies.--
``(1) In general.--Subsections (b)(2), (k), and (m) shall
not apply to any alcohol fuel containing alcohol which is
produced from any feedstock which is a subsidized agricultural
commodity.
``(2) Phase-in of disallowance.--In the case of calendar
years 1995 and 1996, the rate of tax determined under
subsection (b)(2), (k), or (m) with respect to any alcohol fuel
described in paragraph (1) shall be equal to the sum of--
``(A) the rate of tax determined under such
subsection (without regard to this subsection), plus
``(B) 33 percent (67 percent in the case of 1996)
of the difference between the rate of tax under this
section determined with and without regard to
subsection (b)(2), (k), or (m), whichever is
applicable.
``(3) Subsidized agricultural commodity.--For purposes of
this subsection, the term `subsidized agricultural commodity'
means any agricultural commodity which is supported, or is
eligible to be supported, by a price support or production
adjustment program carried out by the Secretary of
Agriculture.''.
(3) Aviation fuel.--Section 4091(c) (relating to reduced
rate of tax for aviation fuel in alcohol mixture) is amended by
redesignating paragraph (5) as paragraph (6) and by inserting
after paragraph (4) the following new paragraph:
``(5) Phase-out of subsidy for alcohol produced from
feedstocks eligible to receive federal agricultural
subsidies.--
``(A) In general.--This subsection shall not apply
to any mixture of aviation fuel containing alcohol
which is produced from any feedstock which is a
subsidized agricultural commodity.
``(B) Phase-in of disallowance.--In the case of
calendar years 1995 and 1996, the rate of tax under
subsection (a) with respect to any mixture of aviation
fuel described in subparagraph (A) shall be equal to
the sum of--
``(i) the rate of tax determined under this
subsection (without regard to this paragraph),
plus
``(ii) 33 percent (67 percent in the case
of 1996) of the difference between the rate of
tax under subsection (a) determined with and
without regard to this subsection.
``(C) Subsidized agricultural commodity.--For
purposes of this paragraph, the term `subsidized
agricultural commodity' means any agricultural
commodity which is supported, or is eligible to be
supported, by a price support or production adjustment
program carried out by the Secretary of Agriculture.''.
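
Paragraphs (1) through (3) above apply one and the same phase-in formula to three different fuel taxes. The sketch below restates that shared arithmetic; it is illustrative only, with invented names, and assumes that the "reduced rate" is the alcohol-mixture rate and the "full rate" is the rate determined without the mixture reduction, as the quoted text describes.

def phased_mixture_tax_rate(full_rate, reduced_rate, year):
    # Rate on a qualified alcohol mixture whose alcohol comes from a
    # subsidized agricultural commodity: the reduced rate before 1995,
    # the reduced rate plus 33% of the difference in 1995, plus 67% in
    # 1996, and the full rate once the reduction no longer applies.
    if year < 1995:
        return reduced_rate
    if year == 1995:
        return reduced_rate + 0.33 * (full_rate - reduced_rate)
    if year == 1996:
        return reduced_rate + 0.67 * (full_rate - reduced_rate)
    return full_rate
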
(c) Effective Dates.--
(1) Credit.--The amendment made by subsection (a) shall
apply to taxable years beginning after December 31, 1994.
(2) Excise taxes.--
(A) In general.--The amendments made by subsection
(b) shall take effect on January 1, 1995.
(B) Floor stock tax.--
(i) In general.--In the case of any alcohol
fuel on which tax was imposed under section
4041, 4081, or 4091 of the Internal Revenue
Code of 1986 before any tax-increase date, and
which is held on such date by any person,
there is hereby imposed a floor stock tax on
such fuel equal to the difference between the
tax that would be imposed under such section
on such date and the tax so imposed.
(ii) Liability for tax and method of
payment.--A person holding an alcohol fuel on
any tax-increase date shall be liable for such
tax, shall pay such tax no later than 90 days
after such date, and shall pay such tax in such
manner as the Secretary may prescribe.
(iii) Exceptions.--The tax imposed by
clause (i) shall not apply--
(I) to any fuel held in the tank of
a motor vehicle or motorboat, or
(II) to any fuel held by a person
if, on the tax-increase date, the
aggregate amount of fuel held by such
person and any related persons does not
exceed 2,000 gallons.
(iv) Tax-increase date.--For purposes of
this subparagraph, the term ``tax-increase
date'' means January 1, 1995, and January 1,
1996.
(v) Other laws applicable.--All provisions
of law, including penalties applicable with
respect to the taxes imposed by sections 4041,
4081, and 4091 of such Code shall, insofar as
applicable and not inconsistent with the
provisions of this subparagraph, apply with
respect to the floor stock taxes imposed by
clause (i). | Amends the Internal Revenue Code to phase out the tax subsidies for alcohol fuels produced from feedstocks which are eligible to receive Federal agricultural subsidies. |
In this episode, your host, David Lee Roth, talks about the history of tattoos in Japan. Dave also shares his own Japanese tattoo experiences.
Looking for more of this episode? Head over to iTunes for bonus material that can be found exclusively on the Audio-Only version of TheRothShow. Here's the link: http://ow.ly/i3hBS ||||| There are many things to admire about David Lee Roth's home. There's the lushly landscaped property, three football fields' worth by his estimation, hidden behind a 9-foot ivy-covered wall on an otherwise undistinguished suburban Pasadena block. There's the house itself, which he's had for 25 years, a sprawling 1920s Spanish-style mansion with 13 rooms in the basement alone. There's the tennis court in back, filled with a foot and a half of sand to accommodate beach barbecues — he doesn't play tennis. But Roth's favorite thing about his home might be the floors. They are important. "Feel this?" He stamps his foot a few times. "That transfer of shock? The floor has to have some give to it. It's why you don't do ballet on concrete." The effect is much like the inch-thick wooden plank, maybe 15 feet by 15 feet, that comprises the center of Van Halen's stage, where he plies a very specific trade. Acrobatic high-kicks and shimmies. Martial-arts maneuvers and sashays. "I have moves onstage it took me two years to learn." There is nothing actually on most of the floors in the house. The living room is bare, save for a few framed photos from the cover shoot of the first Van Halen album hanging crooked in one corner. ("I'm not much of a furniture collector, and I'm not much of an interior decorator.") A ceiling-high built-in china cabinet is empty. Half a bottle of Jamaican rum rests on the mantle, and on the far wall is a rack of Japanese katana swords. "The real deal," he says, unsheathing one. "Razor sharp. The first time I held one of these swords, I was maybe 9 years old." He demonstrates a few moves, alternately fluid and severe. "Everything has to be internalized or you'll look stiff. Once you can do it physically, it becomes your personality." He's decked out in the now standard offstage getup of mustard-colored overalls he started wearing three years ago while building stage props around the house — a samurai Mario brother. "I just had my sword teacher here from Tokyo," he says, rat-a-tat, as he says nearly everything. "Buddhist monk, about my age. He loves two things about Southern California: the chopped liver from the Jewish delicatessen and these floors. Mojo Dojo. You ever hold one of these?" I have not. He places the sword in my hands and I instantly feel dumb. There's a weight to it that only expensive things have, and I'm not quite sure what to do. "Have you ever cut anything with these?" I ask dumbly. He has not; maybe an apple on a string once. I start to swing it tentatively and I can see him wincing out of the corner of my eye. I hand the sword back to him more carefully than I've ever done anything. There is a not insignificant population for whom David Lee Roth is the ur-rock star, the embodiment of everything splendorous and stupid about that term, as responsible as anyone for establishing, defining, and cementing the debauched libertine, hotel room-trashing, groupie-defiling caricature that is cliché and passé and lionized. Roth is a little less famous for having parlayed that caricature into a life that's rich and weird and singular and driven by very particular and exotic enthusiasms ranging from mountain climbing to martial arts to tending to gunshot victims in the Bronx. But this is something he's actively trying to change.
He owes this movable feast to leaving — quitting, getting kicked out of: pick your version of the legend — what was, at the time of his messy exit, the biggest, most over-the-top band in the world. If not for that, he might not be someone who, at 57, would train to be a master swordsman or to speak fluent Japanese. "I wonder who I might have been had I stayed in the band," he says. "Not as interesting, not as involved. I probably would have followed the more traditional, long, slow climb to the middle. Enjoying my accomplishments, living off my residuals. I wouldn't have half the stories to tell." And if not for the fact that Roth rejoined Van Halen in 2007, thus ending rock's most enduring will-they-or-won't-they soap opera, he wouldn't be in the position to try to channel that experience into a sprawling one-man video series and podcast that aspires to do nothing less than tell the history of modern culture through the eyes of someone who has been everywhere, done everything, met everyone, and hired a couple of midgets to be his security detail along the way. He's going to use the internet to save us from the internet. He does not harbor any illusion that his life is easily replicated — celebrities, they aren't just like us — but he would like to, as humbly as one can do such a thing, offer it as an example of what a life can be. "I know what compels me, I know what I'm made of culturally," he says. "It's a variety of neighborhoods. It's Spanish speaking. It's jock. It's graphic arts. It's surfing. It's Hell's Angels. It's Groucho Marx, it's Kurosawa. Throw in some Kenny Chesney. Hey, are you speaking Creole? Great, throw that in too. And not all neighborhoods compel me, I'm not one world, one love." He places the sword back on the wall. "I'm an eye and an ear into a world and a wealth of experience that we're all part of simply by virtue of watching television," he says. "If indeed we live in a Beyoncé world, then this is a view into aspects of that that are rarely discussed and rarely explained. This is of infinite interest to a lot of my colleagues in the fine arts, well and beyond any specific kind of music or part of show business. The stars I see in a lot of people's eyes are because of the uniform, not because of the pilot inside." David Lee Roth has seen some shit and he has to tell you about it. All of it. The hot for teacher has become the teacher.
The house is sparse and drafty — Roth leaves all the windows open at all times, regardless of weather, to help remain in tune with nature. He lives here alone with Russ, his 7-year-old Australian cattle dog, currently at his side; when there is band business to attend to, it's attended to here by production staffers and assistants, and his sister lives in the suite above the garage. He still has a place in New York City and has spent much of the past year at his new apartment in Tokyo. "This house is the only real tangible thing I ever bought, and I only paid it off, like, two years ago. Beyond that, I own three black pickup trucks of varying sizes, depending on the livestock we're moving." Early on a Saturday night in March, the house is empty and quiet. The overall effect is more Sunset Boulevard than Citizen Kane or Grey Gardens, although his kerchief game is legendary. At the bottom of the grand staircase, in the dark foyer with black-and-white marble floors, Roth instead offers, unprovoked, "It's very end of There Will Be Blood." Maybe he's heard every question. Maybe he's had the time to anticipate every question. "The only thing missing is the log cabin: 'Howdy, y'all lost?' We gotta have company more often, whaddya think, Russ?" The solitude is by careful design, as is nearly everything else about Roth's existence. Before his family wound up in Pasadena, they moved around a lot — Indiana, Massachusetts — and he's said in the past that he decided by a young age that he would never have a steady group of friends. "I was clearly mopey as a kid," he says. "But feeling sorry for yourself is a great motivator. I have a handful of very good friends, and I'm lucky enough to have them in different locales. The closest I have to a show business friend is Konishiki, a 600-pound ex-sumo wrestler in Tokyo. He's my language mentor." Seeing his parents split up in high school didn't really endear Roth to the notion of the nuclear family as an aspiration. (Google "david lee roth + married" and the first result is a rumor buried in some metal message board that he married his male chef in a civil ceremony a decade ago; this elicits a proper guffaw, he's never heard this one.) "I've lived alone my whole adult life. I've had girlfriends, I've had love affairs. Never longer than a year and a half. I'm the drunk who won the lottery, I'm going to be very difficult to convince of a lot of traditional things. I put off getting married when I found out, oh, you don't really have to. I just saw somebody discussing this recently, I think Gloria Steinem — same thing as her life story. There's a lot of us out there." The centerpiece of the downstairs hallway is a relic from a louder time: the front grill of the rickety Opel Kadett station wagon Roth drove when Van Halen were just starting out in the mid-'70s playing bars and parties around L.A., bolted to the wall with the back half of a stuffed deer rammed through the windshield. Alex and Eddie Van Halen were acquaintances then — never quite friends, exactly — also from Pasadena, classically trained prodigies who needed Roth's innate showmanship as much as he needed their chops. By the time Van Halen's debut went gold in 1978, Roth was squiring himself around Hollywood in a pearlescent black Mercedes SEL adorned with a giant skull and crossbones on the hood, complete with flames and a 24-karat gold-leaf tooth. "It was a statement of some sort," he says. "I was a favorite of parking attendants all over town." 
Van Halen released six albums in six years, the last of which, 1984, was their breakthrough. In 1983, they headlined Steve Wozniak's US Festival in California to 350,000 people and were paid a million dollars, more than any band ever for a single show. Not much more than a year later, they were in shambles, rock's most dramatic divorce. Years of continued commercial success without Roth in the band through the '80s diluted the brand, replacing Roth's wink and a nudge with a sledgehammer. "We lived our lives like roughnecks," he says. "Roustabouts, circus carnies. I wonder if it's still a dream to live the way we lived. I know the success part of it is. Not just the partying, but the travel, the late nights, not just with groupies, but with all kinds of colleagues in a variety of other pursuits. I wonder if I even see that in people's eyes." There is no point in pretending that I didn't have the Van Halen poster over my bed with this guy frozen in an eternal mid-air split, or that I didn't hurt myself repeatedly trying to strike that pose while jumping off swings: face forward, mouth in an "o," hands pointing down and center. The point is not that an 11-year-old tried to impersonate him, it's that he was someone an 11-year-old would try to impersonate. He was rock star as superhero, a human cartoon — Diamond Dave. It was impossible to imagine having a normal conversation with him, and once that becomes your working definition of a rock star, it is a difficult thing to shake. The Fabulous Picasso Recording Brothers sign from the beginning of 1986's "Goin' Crazy" video hangs by the Opel Kadett sculpture, alongside some framed movie posters stacked and leaning against the wall. On the other side of the hall is a giant framed print of the cover of Eat 'Em and Smile, while a life-size shot of the photo from the cover of 1987's Skyscraper, which shows Roth scaling the side of a mountain, hangs by the stairs, in case you just came to and are wondering whose house you're in. It was the immediate post-Van Halen stretch in the mid-'80s that Roth considers his most decadent — he calls it his "F. Scott Fitzgerald period" because of the increasingly elaborate parties and groupie-wrangling protocols — free of even the pretense of band diplomacy, pushing the cartoon to its logical extreme, playing up the old-soul vaudeville act that was always on the fringes of Van Halen and the strutting id that was always at its center. "Just a Gigolo" and "California Girls" were pop-friendly MTV staples and made Roth seem like something much bigger than the singer in the world's biggest rock band. A movie deal fell through, and, maybe surprisingly for such an obvious ham, Roth never felt compelled to further pursue the Hollywood route. Between Van Halen and his '80s solo albums, he's sold 42 million records worldwide; this is a metric that's lost meaning over the past decade, but by any measure, it's a lot. It's no-one's-ever-going-to-do-that-again a lot. "Diamond Dave is somewhere between Spider-Man and Spanky from Our Gang," he says, popping open a beer. He's never made a point of apologizing or renouncing past clownishness, never showed regret or embarrassment, never worried about those who didn't get the joke, never OD'd or got sober, never got busted for anything more than a dime bag. "I went through a wild phase where I was that person, and perhaps one hurdle is allowing yourself to develop.
Everybody goes through the Harley-Davidson phase, the leather days — that's a great merit badge, and the hardest phase to live through." A close second, though, is a familiar bugaboo to anyone who went platinum in the go-go '80s and found their teased hair and casual hedonism mocked and dwarfed by a dressed-down, purposefully glum zeitgeist. "Two words: Kurt Cobain. I went from playing to 12,000 people to 1,200. From arenas to casinos and state fairs and the local House of Blues. That will cause you to reflect a lot more clearly on your values. Fun wasn't seen as fun anymore." Rather than fight the turn toward cultural irrelevance, he steered into it, especially after a prospective and much ballyhooed 1996 Van Halen reunion imploded before it began. Settling down was no more appealing an option than it had ever been ("I'm 35 and gotta start over, maybe not the best time to start a family, but if you don't want to start a family, then any time is not a good time to start a family"). He eventually became a certified EMT in New York and then completed a tactical medicine training program in Southern California. Not famous enough to headline Madison Square Garden, plenty famous enough to stand out in a tactical medicine training program. "The altitude drop is when somebody realizes who you are and they take you to task. Now you're the guy who gets to do garbage five days in a row instead of one, and doing ambulance-garage garbage is different from I-just-finished-dinner-and-now-I-have-to-dump-the-garbage-darling garbage. That will test you. But I was old enough and smart enough to know what I'd signed up for. These tactics are of value, they're a contribution." For years he went on ambulance calls all over New York City, and found that a life in the music business was good preparation for rushing to the aid of grievously injured people in the less picturesque corners of the city. "My skills were serious," he says. "Verbal judo, staying calm in the face of hyper-accelerated emotion. Same bizarre hours. Same keening velocity." Roth is clearly proud of how he's handled 35 years navigating himself in and out of some very bright spotlights and doesn't begrudge those who may have been less adept. "Up until the past few years, I don't know that Edward was someone who enjoyed any of his celebrity," he says of his once and future costar and frenemy, who married One Day at a Time's Valerie Bertinelli and basically became the Ben Gibbard and Zooey Deschanel of the '80s. "His last name right away is a trumpet. I knew how to use my name to get a good table at a restaurant but also how to use it so you won't recognize me when I'm there. Nine times out of ten I'm sitting next to you and you have no idea. People are surprised by the degree to which I've relinquished the attention, but I was taught this. You gotta learn to exist in both worlds. Uncle Manny taught me this."
Manny Roth is to Diamond Dave as Alfred is to Batman. The brother of Roth's late ophthalmologist father Nate, Manny owned jazz club Café Wha? in Greenwich Village in the early '60s and booked some of the first shows for up-and-comers Bob Dylan, Jimi Hendrix, and Woody Allen. He gave Richard Pryor his first shot and became his first manager. "David's father would bring him down when he was 7, 8, 9, 10 years old, and I would give him the royal treatment," says Manny, now 93, on the phone from his home in Ojai, California. "I used to fix him up with ice cream, whatever he wanted. I didn't try to turn him onto anything, but maybe it was osmosis. I was in the center of the scene there — all you had to do was carry an empty guitar case and the girls would follow you. I did my share of drugs. I had my long hair and all that crap. Every day was an adventure." By 1974, Manny was divorced with three kids and strung out and in debt, which is about as boilerplate a rock 'n' roll narrative as they come, then pulled himself together and ran another nearby club, the Village Gate. He takes pains to credit Roth's parents for being the type of people who encouraged the pursuit of any interest — but osmosis is a powerful thing to an impressionable child bearing ice cream. "By and large, David's stories are the same stories I tell," he says. "From day to day, his life is a new adventure, he's a superstar. He's in Japan learning the language and the jujitsu. He doesn't sit on his ass. I asked him how are the Japanese ladies over there and he said he'd send me one. I love him dearly. I know he can be a little quirky." How so? "Being a rock star is being a rock star, I don't need to go into the details. What would you do if you were a rock star?"
Roth opens the door to the downstairs study, filled with rows and rows of wardrobe racks and lorded over by an 8-foot statue that he's decorated with Japanese tattoos, another interest he's engaged with characteristic vigor. If you're in need of a yellow embroidered matador jacket but aren't sure what kind of yellow embroidered matador jacket, have we got a room for you. Road cases are scattered on the patio outside the study — after nearly a year of unplanned dormancy owing to Eddie Van Halen's bout with diverticulitis that scrapped what had been one of the most successful tours of last year, the machine is cranking up again. There's a one-off show in Australia in April, Japan in May, and a festival in Wisconsin in July. "I've read all these," he says, gesturing to the swollen bookshelves. The most important one he's read recently is Christopher Hitchens' memoir Hitch-22. He refers to the art books regularly, overseeing design needs for Van Halen. "I look forward to the longer flights because I'm actually able to read a full paperback uninterrupted. Here there's a barrage of stimuli: the phone, email. Books shaped who I am early on — Jack London, Mark Twain. All those adventure magazines, like Argosy. Guys who went out into the territory and became merchant marines or opened up a bar somewhere in the South Pacific but then somehow came to own banana fields in Ecuador and then and then and then. And now I'm writing The Innocents Abroad in cyberspace." When Howard Stern left for satellite radio in 2006, Roth took over his time slot — his first guest was Uncle Manny — but almost immediately bristled against what he thought was a restrictive format. The idea of parlaying his loquaciousness into something approaching a day job isn't anything new. He's excited by the prospect of no one telling him what can or can't work. Just as reading Jack London books as a child fueled his wanderlust, he's trying to pass this tradition on, using his own war stories to educate a generation driven to complacency. A generation that may not necessarily know who he is. He wants to be stopped by fans who tell him he inspired them to go to Borneo as much as he's stopped by fans who tell him he inspired them to start a band. The upstairs hallway is lined with platinum records — and platinum cassettes, god bless — monuments to the kind of career a rock band would barely think to aspire to today. At a moment when Mumford and Sons can fairly be called one of the biggest bands in the world, it's hard to overstate the relative ubiquity and range Van Halen possessed — highbrow and lowbrow, butch and flamboyant, appealing to stoners and jocks, men and women, boys and girls. Who else would Jeff Spicoli hire to play his birthday party? Sitting in the upstairs office, Roth fishes a Marlboro Light out of a pack and tosses it back onto a small wooden school desk — they're good for what he calls his "rusty pipe" of a voice. Loleatta Holloway plays on a loop from a stereo in another room. CDs and DVDs line the bookshelves in woozy, haphazard stacks, Russ' crate sits next to a bowl of water and an exercise bike. On the wall by the bathroom door are scribbles that look from a distance like height markings for a growing child, but they're actually ideas and song titles, chicken scratch. The windows open out onto the vast backyard as the sun goes down. "It's the last refuge of the great outdoors without having to leave the city. You're not going to find this in Beverly Hills." 
In December 2011, Roth posted an eight-minute black-and-white video scrapbook, compiled with the help of editor Shelly Toscano, that served as a de facto (re-)introduction to Roth and his interests. Another, showing Roth herding dogs, was shown during breathers on last year's Van Halen tour. Toscano came on to work for Roth full-time, primarily editing Van Halen promo materials, but his inherent hamminess had discovered a new outlet and a new purpose. Last October, The Roth Show launched: It's a YouTube series shot at the house or on the road, with an audio version available as a podcast, and it's nothing more or less than David Lee Roth speaking for a half hour on, more or less, a single topic. Tattoos. FM and underground radio. The history and semiotics of pop videos by way of Picasso. A long-ago trip to New Guinea. His personal history with drinking and smoking. Slideshows from an unending vacation. The episodes are monologues, history lessons, personal taxonomy, but really, mostly just talking and more talking, social-studies lectures by way of rock 'n' roll Babylon, at carnival-barker cadence. He speaks in dog years. He is the Ken Burns of David Lee Roth. The effect is overwhelming. The show has already amassed hours of Roth unpacking himself, and it's hard to think of any figure of his status, in any field, who has put himself or herself out there to this degree, unfiltered and unabridged, exhaustive and exhausting. His 1997 memoir, Crazy from the Heat, is a jewel of the trashy-bio genre — most of the book is the stuff you'd dog-ear in other trashy bios — but that will soon be a cocktail-napkin scribble by comparison. At a half hour, guest-free, every three weeks, his pace is ferocious. The shows do fine; the podcast is regularly in the iTunes top 200, and each episode has around 20,000 views on YouTube. But Roth is envisioning bigger things. He is only starting to sift through and digitize and catalogue a dozen or so hours of no doubt incriminating video and Super 8 footage he shot backstage and on the road during Van Halen's bacchanalian prime that can serve as the springboard for future episodes. (There has never been an authorized Van Halen documentary; he's taken it upon himself to be the band's de facto archivist.) He seems no less consumed with chronicling himself than a teenage livestreamer, and not just for the benefit of fans he has in the bag. "If I narrate this appropriately, I can illuminate," he says. "Partying means something very different now than it did back then, and there are relatively few individuals in my position who are both willing to discuss it and articulate enough to make it accessible. Do you want a Victorian-style painting with your foot up on the buffalo and a teak wood settee in front of a tent — 'Yes, wonderful hunting foray, can't wait to return home, cheeksie' — or do you want a no-holds-barred truth-told-at-every-juncture reiteration that compels questions? If your only question as a 20-year-old is, 'How did you become a success?' a fair amount of explanation is in The Roth Show." Beyond it being an erudite scrapbook of bad behavior, he wants the show to be a travelogue. He wants guests. He wants sponsors. "You can't just go into a sumo stable and get an interview," he says — he's in pitch mode now, which does not sound markedly different than any of his other modes save for a slight uptick in urgency. "We have an opportunity to talk to a marvelous group of people that few others have access to. 
I'm pretty conversant on a lot of subjects, and I'm good at asking the questions." It doesn't even matter much whether the stories are apocryphal. The legend that Van Halen wouldn't play if they found brown M&Ms in their backstage jar is cited as a prime example of the era's excess and hubris; the reality is that the request was buried in their contract rider as a test to see whether venues were abiding by the intricate technical specifications for the stage and sound — at first blush, that's more boring, but as a whole, it's a proper snapshot of frontier life. By now these legends add up to a bigger story that is true, even if it's not all, you know, true. There are no lights, not in here, not in many of the rooms; at night in the office, Roth uses the TV for illumination, and beyond that, he's got 25 years of walking in the dark here, he knows every corner, there's not much to bump into. When it gets too dark to see across the desk, he calls downstairs to Mark Rojas, who was a kid when his mother worked for Roth and now shoots The Roth Show. Rojas enters wearing a wool overcoat — it's downright cold now — and holding a flashlight and a floor lamp. He plugs in the lamp, turns it on, and exits. Even a casual music fan might feel intimately familiar with Eddie Van Halen's recent medical chart: oral cancer, hip replacement. Meanwhile, David Lee Roth has been quietly paying the price for a lifetime of hard landings. He's undergone two major lower back surgeries in recent years. The office chair is missing its right armrest — Roth also had his shoulder reattached in four places and needed to be able to sleep in the chair when it was too uncomfortable to lie on his back. "See that?" He points to a framed picture of Elvis Presley mounted on the wall just a foot or two off the floor, between two open French windows. "I had to move that down because I slept with my head against that wall. Dog is here, dog watches door." But Roth doesn't seem fragile — he barely has crow's-feet — and he's inspired and challenged by technology and youth culture in ways that he thinks a lot of his peers aren't, downloading house mixes from Beatport, hailing David Guetta and Skrillex and Deadmau5 ("I'm always curious as to who's got the biggest boom in the room"), marveling at the stagecraft of J-pop boy-band Exile. He can barely name a younger rock band that interests him or that he thinks may bear traces of Van Halen's spiritual DNA. Maybe Kings of Leon. And he's not afraid of the contentiousness of the internet, built to destroy, or at least to embarrass — very different from the unconditional adoration from a hockey arena full of paying fans shouting along to "Panama" in unison. A few years ago, someone posted an isolated vocal track of Roth singing "Dance the Night Away" in the studio, maybe not quite pitch-perfect. "People like to feel superior to someone who's famous," he says with a shrug. "If that had happened in the '80s and was sent out to radio, that would have been a problem. But taken in the main, one lousy vocal take on the internet next to hundreds of non-lousy ones, that just describes a human being. One of the most basic definitions of art is something that compels commentary." Even if that commentary is cruel and unbecoming and coming from entitled peanut gallerists who haven't done a fraction of what he's done in his life? "Sure," he says. "I enjoy the entitlement."
If ever there was any question that David Lee Roth was the odd man out in Van Halen, there certainly isn't now — he's the only person in Van Halen whose last name is not Van Halen. They are a band of brothers in that two of its members are brothers; in reality, though, they are partners in a highly successful corporation that operates under specific and doctrinaire bylaws, which may not be wholly satisfactory to any of the individual parties but are necessary for the continued liquidity of the firm. Roth had never met Van Halen's current bassist Wolfgang, Eddie's 22-year-old son, until they were bandmates; they have yet to so much as step out for a coffee together, which is understandable given that he grew up knowing Roth only as that guy Daddy fucking hates. Roth fully admits that Wolfgang's involvement was never negotiable and that he hasn't spoken to original bassist and odd man out Michael Anthony in "years." "I had no choice, it was Edward's decision, but luckily for all of us, the kid is very good," he says. "If there were a lacking in the program, I wouldn't participate; I have very high standards of musical excellence. But I do feel a little like Sammy Davis Jr. in the Rat Pack." This lack of warm fuzzies isn't scandalous, there's no friction, it's not fodder for whatever would now pass for a rock gossip mill. Roth travels alone on his own bus with Russ, the same bus that housed 13 people on his last solo tour in 2006. He sees his bandmates at the venue a little while before showtime. After the show, he goes out to a dance club or a strip club; they do not. The most shocking secret about Van Halen in 2013 is that they're not volatile or liable to collapse from the weight of interpersonal strife. The machine is built to sustain that — and in fact is fueled by the suspicion that this may not be the case. "We don't really do anything else for a livelihood," he says, lighting another cigarette. "But what we also do is create the aura that it may never happen. Has this served us well? It's served us superbly. We're Friday Night Lights — the kid who breaks his knee and his career is ruined, the father who's drunk and doesn't show up for the important game. It may be painful, but it's better than just winning and winning. We've become one of the great American stories." And what's more American than the institutionalization of something that once felt untamable? Van Halen's seventh album with Roth, A Different Kind of Truth, their first with him in 29 years, came out last spring. It sold around a half a million copies, which is certainly respectable by any 21st-century math. But given the tortuous backstory and the fact that the album is actually really good, or at least as good as anyone could have the nerve to hope for from a Van Halen album in 2012, the response felt fairly muted. The band didn't help matters by swearing off promotional press — of course that was interpreted as a sign of trouble, or that maybe they didn't want to talk about the fact that the album comprised reworked versions of songs that had been languishing in the vaults for as long as Van Halen has existed. ("There's so many people on television telling you why you should buy something," he says. "We felt we were making a sterling statement by not doing that.") Van Halen are playing the long con. "What people don't suspect is that we're rehearsing, individually and as an ensemble, constantly," he says. "Year-round, stopping only for personal injury, and right now the band is doing phenomenally. 
But it's not like we're the Stones, living and jamming together in the South of France and getting drunk and sharing dames after we practice. It was never like that." Barring further personal injury, Van Halen will continue as it is now: a massive box-office draw that will put out albums every few years, quite possibly featuring songs that weren't conceived in the Carter administration. There will be rumors that the band members aren't all best pals; these will be too true and too mundane to necessitate refuting, but the refuting is part of the deal, as much a part of the band's mystique as any song. What was once the Platonic ideal of rock 'n' roll indulgence is now structured around, literally, family values; for true adventure, Roth looks elsewhere. This is not a complaint, it's a fact. Without the clout and imprimatur of Van Halen, The Roth Show loses a little luster, a little marketability. Predictability is a punch worth rolling with. "The creative process for Van Halen could be more Technicolor," he says. "Let's go somewhere French, Tahiti or the West Indies, woodshed in a studio that's on a boat, travel to little islands, and play the local bar on Wednesday and Friday nights. Somewhere that has international influence to alternately complain about and celebrate and adds more to the emotional menu than, 'We went up to Ed's place.' That mantra hasn't changed in 20 years and isn't going to. Everyone else has very real families, very real tent stakes. To me, travel is still exciting, but for others, it can smack of alienation or disenfranchisement. There are no surprises here at all. We're on that James Bond schedule, every three years." Just as Van Halen's debut appeared smack between disco and punk, they've never been part of any movement or moment that could be considered in vogue culturally, and there isn't much that's popular now that owes them any evident stylistic debt. The Van Halen brothers had musical skills any wonky prog band would have envied, but they were more concerned with getting laid than impressing the conservatory. They played Kinks songs, but in ways that were impossible to replicate at home. They talked about chicks and cars and chicks in cars, but slyly, and with the occasional touch of Tin Pan Alley to boot. They predated — presaged, really — hair metal, yet this iteration of the band was gone by the time their blunter, glossier spawn passed for mainstream pop. But it's difficult to parse what Van Halen means today to a generation that didn't have the poster, whose musical heroes are an @ reply away. Even with a wholly respectable new album, they exist as the cultural equivalent of a broken-in pair of jeans. Or, as Roth says, an action figure, G.I. Joe with the kung-fu grip, not meant to be updated or improved upon. Ask Roth about his, and the band's legacy, and he'll give a good answer, as he always does, about how the band's intricate musicianship makes them impossible to imitate, by design, and how people can ape his moves or his drinking or his shirtlessness without hinting at any of his charisma, and this is all still true watching them now. But the notion of them as roughnecks or roustabouts or circus carnies is frozen in amber. If they have to be a museum piece, he'd at least like to be the tour guide.
In January of last year, Van Halen played a surprise show to announce the new album — at Café Wha?, easily the smallest venue they'd graced since the mid-'70s. "David called me and said, 'I've got great news. It's taken me 50 years, but I've finally made it,'" says Manny. "He said he wanted to pay for my trip out there, but I wouldn't let him." Between songs, Roth did his familiar ringmaster spiel but took an especially long moment to point out Manny, front and center and beaming in this club the size of the Garden's security barrier. He reminisced about watching his uncle lay down the club's marble floor, about how being in this room made him the kind of person who could front a band way too big to be playing this room. From someone once considered to be the poster boy for rock artifice, it was about as human a moment as you'd ever need to see onstage.
Van Halen at Cafe Wha?, January 5, 2012
"I'm ready to do it all over again," Manny tells me 15 months later. Roth is worried about his uncle — he's been having respiratory problems, but he sounds strong now, he is in pitch mode. "I have TV connections, I have friends in New York I've stayed in touch with. It would be much bigger than opening a club. I'm back in the game now." He is a lifer, and lifers don't retire. It seems safe to say that this, too, has been learned by osmosis. It's nearly midnight. Roth and I walk to the top of the stairs and Russ saunters over, tail wagging. He asks me if I want to hear Russ sing. I do, of course, because Russ is a dog. "How about some Motown?" Roth inhales deeply and lets loose in a familiar rasp. "Daaaaaancing in the streeeee —" and then Russ joins in with, "Awooooooooooo!" Roth cracks up like they've never done this before, but of course they have. Just because they've done it before doesn't mean it's not entertaining. | – Remember David Lee Roth? The wild Van Halen lead singer had quite a bumpy ride once the band broke up and his solo career ran its course in the 1980s, writes Steve Kandell in BuzzFeed. Starting over, Roth worked as an EMT on ambulance calls in New York City. "The altitude drop is when somebody realizes who you are and they take you to task,” he says. "Now you're the guy who gets to do garbage five days in a row instead of one. … That will test you." Why the tough times? "Two words: Kurt Cobain," says Roth, of America's shift to a gloomy zeitgeist. "I went from playing to 12,000 people to 1,200." But the determined bon vivant left ambulance work for other adventures—like studying Japanese swordplay, hosting a radio show, and posting video monologues on everything from tattoos to pop-video semiotics to an influential sumo wrestler. At 57, he's also back with Van Halen, now a humbler band that practices year-round and refuses to promote itself. Which makes the band members' wild past stand out all the more: "We lived our lives like roughnecks," Roth says. "Roustabouts, circus carnies. I wonder if it's still a dream to live the way we lived. I wonder if I even see that in people's eyes." Click for the full article. |
the interplay of different cell types of origin and distinct oncogenic mutations may determine the tumor subtype .
we have recently found that although both basal and luminal epithelial cells can initiate prostate tumorigenesis , the latter are more likely to undergo transformation in response to a range of oncogenic events . |
|
fanconi syndrome results from generalized dysfunction of the proximal renal tubule leading to impaired reabsorption of amino acids, glucose, sodium, potassium, bicarbonate, and phosphorus.
phosphate wasting owing to chronic renal loss, together with inadequate synthesis of 1,25(oh)2 vitamin d, produces hypophosphatemia, glycosuria with normal serum glucose levels, aminoaciduria, hypouricemia, and hypokalemia.
the symptoms include fatigue, muscle weakness, bone pain, fracture, and bone deformity.
numerous drugs are associated with the development of acquired fanconi syndrome, including antiretrovirals, aminoglycosides, and salicylates.
recently, the nephrotoxicity of adefovir dipivoxil (adv), including fanconi syndrome, was reported secondary to antiretroviral therapy in hiv patients.
adv is excreted unchanged in the urine through glomerular filtration and tubular secretion, and administration of 60 mg daily and above has been associated with nephrotoxicity.
however, as a potent nucleotide analog against both wild-type and lamivudine (lam)-resistant hepatitis b virus (hbv), adv has been found to cause nephrotoxicity even at a 10 mg daily dose in hbv patients; thus, the renal safety of adv has attracted new concern. here, we describe acquired fanconi syndrome in conjunction with severe hypophosphatemic osteomalacia in 2 chronic hepatitis b patients on prolonged low-dose (10 mg daily) adv therapy.
the first patient was a 56-year-old male with a 1-year history of progressive bone pain involving his knees, ankles, lower back, and ribs, with difficulty in walking.
he had a 7-year history of chronic hepatitis caused by hbv infection and had received lam therapy for 2 years.
because the virus developed resistance to lam, he had been receiving adefovir at a daily dose of 10 mg for 60 months before the development of these symptoms.
the patient was not previously on any medication or herbal remedy known to affect skeletal health or result in nephrotoxicity.
the laboratory results showed the features of proximal renal tubule dysfunction, particularly severe hypophosphatemia [table 1].
bone scintigraphy revealed multifocal lesions including multiple ribs, costochondral junctions, costovertebral junctions, the sacrum, both posterior iliac bones, both proximal tibiae, the right calcaneus, and the left second metatarsophalangeal joint area, which were suggestive of a metabolic bone disorder (99mtc-hydroxymethylene diphosphonate whole-body bone scintigraphy showed multiple foci of increased radiotracer uptake in the thoracic spine, the sacroiliac region, the rib cage, the shoulders, the knees, and the ankles) [figure 1].
the patient refused phosphate supplementation for personal reasons. even so, bone pain was significantly reduced, and laboratory findings [table 1] returned to normal within months of discontinuation of adv and supplementation with bailing capsule, calcium carbonate, and cholecalciferol.
table 1: renal parameters at baseline and during follow-up. figure 1: 99mtc-hydroxymethylene diphosphonate scintigraphy demonstrates significant abnormal uptake in the calvaria, maxilla, both scapulae, and ribs.
the second patient was a 57-year-old male who presented with a 3-year history of generalized bone pain involving the neck, shoulders, hips, and knees, which resulted in an antalgic gait.
he had a history of chronic hepatitis b-induced cirrhosis, for which lam 100 mg daily had been commenced 7 years before he was admitted to the hospital.
adv at a dose of 10 mg daily was added owing to increasing serum aminotransferases and hepatitis b dna levels, suggestive of lam resistance.
this patient was also not previously on any medication or herbal remedy known to affect skeletal health or result in nephrotoxicity.
as he presented with diffuse musculoskeletal pain, the patient was initially diagnosed with seronegative spondyloarthropathy in an outpatient department.
after admission to the hospital, hypophosphatemia and urinary phosphate wasting were confirmed by routine tests.
a diagnosis of hypophosphatemic osteomalacia in the context of fanconi syndrome secondary to adefovir therapy was made.
dual-energy x-ray absorptiometry and x-rays revealed a diffuse decrease in bone density.
radiography showed ischemic necrosis of the femoral head on both sides and bilateral femoral neck fractures (garden iii on the right, garden iii on the left) [figure 2a].
magnetic resonance imaging of both hip joints showed fractures across the right and left femoral necks and bone edema, with low intensity on t1-weighted images and high intensity on t2-weighted images [figure 2b].
the patient was thus diagnosed with fanconi syndrome and pathological femoral neck fracture associated with osteomalacia induced by low-dose adv treatment.
adefovir was, therefore, ceased and entecavir commenced at a daily dose of 1 mg.
the symptoms improved after several months of supplementation with elemental phosphate, calcium carbonate, calcitriol, and bailing capsule.
figure 2: the transverse-section t1-weighted image demonstrates low-intensity femoral neck fractures, and the t2-weighted image shows high-intensity bone edema in both femoral necks.
as for these two patients, normal levels of 1,25-dihydroxyvitamin d3, together with urinalysis and 24-h urine collection confirming phosphate wasting, led us to consider that the impaired phosphate reabsorption was caused by proximal renal tubular dysfunction, not by vitamin d deficiency. increased levels of potassium, glucose, n-acetylglucosaminidase, and β2-microglobulin in urine samples collected over 24 h also supported our conclusion.
the quick resolution of serum phosphate levels suggests a close association between adv and tubular kidney function.
all nucleoside analogs have the potential to inhibit human dna polymerase gamma, which is involved in mitochondrial dna replication, and this can lead to varying clinical manifestations of mitochondrial toxicity.
the pathophysiology of adefovir-related nephrotoxicity is multifactorial, initially dependent upon its entry into renal tubular cells through the human renal organic anion transporter-1 (hoat-1).
the importance of the hoat-1 transporter was shown by a reduction in adefovir nephrotoxicity through its inhibition in patients on nonsteroidal anti-inflammatory agents.
adefovir was initially developed as an antiretroviral agent for hiv but was abandoned owing to the high rate of nephrotoxicity at higher doses.
this side effect is, however, rarely reported at a dose of 10 mg daily, which was approved for the treatment of chronic hepatitis b in 2002.
although these two patients had taken lamivudine for several years, which also has nephrotoxicity, both of them developed typical symptoms of osteomalacia only several years after commencing the suspected drug, so we concluded that the acquired fanconi syndrome was secondary to adefovir therapy. among the nucleoside analogs approved for use in hepatitis b, entecavir, with a favorable safety profile and low incidence of resistance, is a reasonable alternative.
it is worth noting that almost all reported cases of nephrotoxicity caused by adefovir involved middle-aged or elderly, predominantly male patients; we conjecture that host cofactors such as age, gender, and medical comorbidities may also influence the risk of nephrotoxicity.
the clinical pattern of renal toxicity is dose-dependent and usually presents as reversible proximal renal tubular toxicity, characterized by slight rises in serum creatinine levels and decreases in serum phosphate levels.
the first patient achieved complete remission in both symptoms and laboratory indicators after the therapy was stopped promptly. the second case was delayed by the initial misdiagnosis and thus lost the opportunity for reversal:
in him, the agent not only caused frank renal insufficiency and troublesome renal tubular acidosis but also hypophosphatemia, and femoral neck fractures occurred.
we noted a distinct increase in serum bone-specific alkaline phosphatase levels in both patients during the weeks after adefovir cessation and the administration of supplemental phosphate and vitamin d. this occurred in the absence of abnormalities in the other hepatic enzymes.
we therefore interpret this change as a reflection of skeletal recovery in response to the removal of the nephrotoxin and the supplementation of phosphate, a crucial substrate for bone mineralization. despite large clinical trials advocating the safety of adv at 10 mg daily,
to the best of our knowledge there have been fewer than 10 reported cases of fanconi syndrome with prolonged adv use, and even fewer with secondary necrosis of the femoral head.
the long-term consequences of proximal tubular dysfunction in patients treated with adv deserve further study, and there is increasing evidence that adv is associated with fanconi syndrome.
such cases can be initially clinically silent yet lead to serious medical problems. because osteomalacia often presents with diffuse bone pain, the disease can be confused with various musculoskeletal diseases; clinicians prescribing this drug should be aware of this potential complication, and these patients should have regular monitoring for hypophosphatemia, hypoalbuminemia, and proteinuria, including β2-microglobulin as an early indicator of tubulopathy.
proactive surveillance and prompt management are recommended to prevent the development of irreversible tubulointerstitial injury and necrosis of the femoral head.
xiao - bing wang contributed to the conception of the work , drafting and revising the draft , approval of the final version of the manuscript , and agreed to be accountable for all aspects of the work .
xiao - chun zhu contributed to the conception of the work , conducting the study , revising the draft , approval of the final version of the manuscript , and agreed to be accountable for all aspects of the work .
xiao - ying huang contributed to the conception of the work , revising the draft , approval of the final version of the manuscript , and agreed to be accountable for all aspects of the work .
wen - jing ye contributed to the analysis of data for the work , conducting the study , revising the draft , approval of the final version of the manuscript , and agreed to be accountable for all aspects of the work .
liang - xing wang contributed to the conception and design of the work , revising the draft , approval of the final version of the manuscript , and agreed to be accountable for all aspects of the work . | fanconi syndrome results from a generalized abnormality of the proximal tubules of the kidney and , owing to phosphate depletion , can cause hypophosphatemic osteomalacia .
adefovir dipivoxil ( adv ) effectively suppresses hepatitis b virus replication but can exhibit nephrotoxicity even when administered at a low dosage .
we report two cases of fanconi syndrome induced by adv at 10 mg / day . regular screening for evidence of proximal tubular dysfunction , together with detailed bone metabolic investigations for prompt detection of adv nephrotoxicity , is critically important to ensure timely drug withdrawal before the development of irreversible tubulointerstitial injury .
we consider a system of @xmath0 identical bosons in @xmath3 dimensions , described by a wave function @xmath4 . here
@xmath5 is the subspace of @xmath6 consisting of wave functions @xmath7 that are symmetric under permutation of their arguments @xmath8 .
the hamiltonian is given by @xmath9 where @xmath10 denotes a one - particle hamiltonian @xmath11 ( to be specified later ) acting on the coordinate @xmath12 , and @xmath13 is an interaction potential . note the mean - field scaling @xmath14 in front of the interaction potential , which ensures that the free and interacting parts of @xmath15 are of the same order .
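since the displayed formula survives only as a placeholder , a plausible rendering of the hamiltonian just described is the standard mean - field form below ; the symbols $h_i$ , $w$ and the prefactor $1/n$ are assumptions consistent with the surrounding text , not a verbatim reconstruction .

$$% assumed mean - field hamiltonian : one - body part plus pair interaction
% with the 1/n scaling mentioned in the text
h_n \;=\; \sum_{i=1}^{n} h_i \;+\; \frac{1}{n} \sum_{1 \leqslant i < j \leqslant n} w(x_i - x_j)\,.$$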
the time evolution of @xmath16 is governed by the @xmath0-body schrödinger equation @xmath17 for definiteness , let us consider factorized initial data @xmath18 for some @xmath19 satisfying the normalization condition @xmath20 . clearly ,
because of the interaction between the particles , the factorization of the wave function is not preserved by the time evolution .
however , it turns out that for large @xmath0 the interaction potential experienced by any single particle may be approximated by an effective mean - field potential , so that the wave function @xmath21 remains approximately factorized for all times . in other words
we have that , in a sense to be made precise , @xmath22 for some appropriate @xmath23 .
a simple argument shows that in a product state @xmath24 the interaction potential experienced by a particle is approximately @xmath25 , where @xmath26 denotes convolution .
this implies that @xmath23 is a solution of the nonlinear hartree equation @xmath27 let us be a little more precise about what one means with @xmath28 ( we omit the irrelevant time argument ) .
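for reference , the nonlinear hartree equation just mentioned presumably takes the standard form below ( the symbols $\varphi_t$ , $h$ , $w$ are assumed ) :

$$% assumed hartree equation : one - body hamiltonian plus the
% self - consistent potential w * |\varphi_t|^2
\mathrm{i}\, \partial_t \varphi_t \;=\; h\, \varphi_t + \bigl( w * \lvert \varphi_t \rvert^2 \bigr)\, \varphi_t\,.$$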
one does not expect the @xmath29-distance @xmath30 to become small as @xmath2 .
a more useful , weaker , indicator of convergence should depend only on a finite , fixed number , @xmath31 , of particles . [ in fact , @xmath31 may be taken to grow like @xmath32 . ] to this end
we define the reduced @xmath31-particle density matrix @xmath33 where @xmath34 denotes the partial trace over the coordinates @xmath35 , and @xmath36 denotes ( in accordance with the usual dirac notation ) the orthogonal projector onto @xmath16 . in other words , @xmath37 is the positive trace class operator on @xmath38 with operator kernel @xmath39 the reduced @xmath31-particle density matrix @xmath40 embodies all the information contained in the full @xmath0-particle wave function that pertains to at most @xmath31 particles .
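in formulas , the definition just described presumably reads ( assumed notation ) :

$$% assumed rendering : partial trace of the projector onto \psi_n
% over the last n - k coordinates
\gamma_n^{(k)} \;=\; \operatorname{tr}_{k+1, \dots, n}\, \lvert \psi_n \rangle \langle \psi_n \rvert\,.$$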
there are two commonly used indicators of the closeness @xmath41 : the projection @xmath42 and the trace norm distance @xmath43 it is well known ( see e.g. @xcite ) that all of these indicators are equivalent in the sense that the vanishing of either @xmath44 or @xmath45 for some @xmath31 in the limit @xmath2 implies that @xmath46 for all @xmath47 .
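the two indicators presumably have the following standard forms ( the names $e_n^{(k)}$ and $r_n^{(k)}$ are assumed , not the paper 's own symbols ) :

$$\begin{aligned}
% projection indicator : deficit of overlap with the condensate
e_n^{(k)} &\;=\; 1 - \big\langle \varphi^{\otimes k} \,,\, \gamma_n^{(k)}\, \varphi^{\otimes k} \big\rangle\,, \\
% trace - norm indicator
r_n^{(k)} &\;=\; \operatorname{tr} \big\lvert\, \gamma_n^{(k)} - \bigl( \lvert \varphi \rangle \langle \varphi \rvert \bigr)^{\otimes k} \big\rvert\,.
\end{aligned}$$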
however , the rate of convergence may differ from one indicator to another .
thus , when studying rates of convergence , they are not equivalent ( see section [ measures of convergence ] below for a full discussion ) .
the study of the convergence of @xmath48 in the mean - field limit towards @xmath49 for all @xmath50 has a history going back almost thirty years .
the first result is due to spohn @xcite , who showed that @xmath51 for all @xmath50 provided that @xmath13 is bounded .
his method is based on the bbgky hierarchy ,
$$\mathrm{i}\, \partial_t \gamma_n^{(k)}(t) \;=\; \sum_{i=1}^{k} \big[ h_i \,,\, \gamma_n^{(k)}(t) \big] + \frac{1}{n} \sum_{1 \leqslant i < j \leqslant k} \big[ w(x_i - x_j) \,,\, \gamma_n^{(k)}(t) \big] + \frac{n-k}{n} \sum_{i=1}^{k} \operatorname{tr}_{k+1} \big[ w(x_i - x_{k+1}) \,,\, \gamma_n^{(k+1)}(t) \big]\,,$$
an equation of motion for the family @xmath53 of reduced density matrices .
it is a simple computation to check that the bbgky hierarchy is equivalent to the schrödinger equation for @xmath21 . using a perturbative expansion of the bbgky hierarchy
, spohn showed that in the limit @xmath2 the family @xmath53 converges to a family @xmath54 that satisfies the limiting bbgky obtained by formally setting @xmath55 in .
this limiting hierarchy is easily seen to be equivalent to the hartree equation via the identification @xmath56 .
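the identification is presumably the factorized one ( assumed rendering ) :

$$% assumed : factorized density matrices solve the limiting hierarchy
% whenever \varphi_t solves the hartree equation
\gamma^{(k)}(t) \;=\; \bigl( \lvert \varphi_t \rangle \langle \varphi_t \rvert \bigr)^{\otimes k}\,, \qquad k = 1, 2, \dots$$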
we refer to @xcite for a short discussion of some subsequent developments . in the past few years considerable
progress has been made in strengthening such results in mainly two directions .
first , the convergence @xmath57 for all @xmath50 has been proven for singular interaction potentials @xmath13 .
it is for instance of special physical interest to understand the case of a coulomb potential , @xmath58 where @xmath59 . the proofs for singular interaction potentials are considerably more involved than for bounded interaction potentials .
the first result for the case @xmath60 and @xmath61 is due to erdős and yau @xcite .
their proof uses the bbgky hierarchy and a weak compactness argument . in @xcite , schlein and elgart extended this result to the technically more demanding case of a semirelativistic kinetic energy , @xmath62 and @xmath58 .
this is a critical case in the sense that the kinetic energy has the same scaling behaviour as the coulomb potential energy , thus requiring quite refined estimates .
a different approach , based on operator methods , was developed by fröhlich et al . in @xcite , where the authors treat the case @xmath60 and @xmath58 .
their proof relies on dispersive estimates and counting of feynman graphs . yet
another approach was adopted by rodnianski and schlein in @xcite .
using methods inspired by a semiclassical argument of hepp @xcite focusing on the dynamics of coherent states in fock space , they show convergence to the mean - field limit in the case @xmath60 and @xmath63 .
the second area of recent progress in understanding the mean - field limit is deriving estimates on the rate of convergence to the mean - field limit .
methods based on expansions , as used in @xcite and @xcite , give very weak bounds on the error @xmath64 , while weak compactness arguments , as used in @xcite and @xcite , yield no information on the rate of convergence . from a physical point of view , where @xmath0 is large but finite , it is of some interest to have tight error bounds in order to be able to address the question whether the mean - field approximation may be regarded as valid .
the first reasonable estimates on the error were derived for the case @xmath65 and @xmath58 by rodnianski and schlein in their work @xcite mentioned above .
in fact they derive an explicit estimate on the error of the form @xmath66 for some constants @xmath67 . using a novel approach inspired by lieb - robinson bounds , erdős and
schlein @xcite further improved this estimate under the more restrictive assumption that @xmath13 is bounded and its fourier transform integrable .
their result is @xmath68 for some constants @xmath69 . in the present article
we adopt yet another approach based on a method of pickl @xcite .
we strengthen and generalize many of the results listed above , by treating more singular interaction potentials as well as deriving estimates on the rate of convergence .
moreover , our approach allows for a large class of ( possibly time - dependent ) external potentials , which might for instance describe a trap confining the particles to a small volume .
we also show that if the solution @xmath70 of the hartree equation satisfies a scattering condition , all of the error estimates are uniform in time . the outline of the article is as follows
. section [ measures of convergence ] is devoted to a short discussion of the indicators of convergence @xmath45 and @xmath44 , in which we derive estimates relating them to each other . in section [ l2 potentials ] we state and
prove our first main result , which concerns the mean - field limit in the case of @xmath29-type singularities in @xmath13 ; see theorem [ theorem for l^2 potentials ] and corollary [ corollary of main theorem ] . in section [ section : singular potentials ] we state and
prove our second main result , which allows for a larger class of singularities such as the nonrelativistic critical case @xmath60 and @xmath71 ; see theorem [ theorem for singular potentials ] . for
an outline of the methods underlying our proofs , see the beginnings of sections [ l2 potentials ] and [ section : singular potentials ] .
we would like to thank j. fröhlich and e. lenzmann for helpful and stimulating discussions .
we also gratefully acknowledge discussions with a. michelangeli which led to lemma [ lemma : relationship between es ] . except in definitions , in statements of results and where confusion is possible , we refrain from indicating the explicit dependence of a quantity @xmath72 on the time @xmath50 and the particle number @xmath0 .
when needed , we use the notations @xmath73 and @xmath74 interchangeably to denote the value of the quantity @xmath75 at time @xmath50 . the symbol @xmath76 is reserved for a generic positive constant that may depend on some fixed parameters .
we abbreviate @xmath77 with @xmath78 . to simplify notation
, we assume that @xmath79 .
we abbreviate @xmath80 and @xmath81 .
we also set @xmath82 .
for @xmath83 we use @xmath84 to denote the sobolev space with norm @xmath85 , where @xmath86 is the fourier transform of @xmath87 .
integer indices on operators denote particle number : a @xmath31-particle operator @xmath88 ( i.e. an operator on @xmath89 ) acting on the coordinates @xmath90 , where @xmath91 , is denoted by @xmath92 . also , by a slight abuse of notation , we identify @xmath31-particle functions @xmath93 with their associated multiplication operators on @xmath89 .
the operator norm of the multiplication operator @xmath87 is equal to , and will always be denoted by , @xmath94 .
we use the symbol @xmath95 to denote the form domain of a semibounded operator .
we denote the space of bounded linear maps from @xmath96 to @xmath97 by @xmath98 , and abbreviate @xmath99 .
we abbreviate the operator norm of @xmath100 by @xmath101 . for two banach spaces ,
@xmath96 and @xmath97 , contained in some larger space , we set @xmath102 and denote by @xmath103 and @xmath104 the corresponding banach spaces .
this section is devoted to a discussion , which might also be of independent interest , of quantitative relationships between the indicators @xmath45 and @xmath44 . throughout this section
we suppress the irrelevant index @xmath0 .
take a @xmath31-particle density matrix @xmath105 and a one - particle condensate wave function @xmath106 .
the following lemma gives the relationship between different elements of the sequence @xmath107 , where , we recall , @xmath108 [ lemma : relationship between es ] let @xmath105 satisfy @xmath109 let @xmath106 satisfy @xmath110 .
then @xmath111 let @xmath112 be an orthonormal basis of @xmath89 with @xmath113
. then @xmath114 therefore , @xmath115 this yields @xmath116 and the claim follows .
the bound in is sharp .
indeed , let us suppose that @xmath117 for some function @xmath87 .
then @xmath118 where the second inequality follows by restricting the supremum to product states @xmath119 and writing @xmath120 .
the next lemma describes the relationship between @xmath121 and @xmath122 , where , we recall , @xmath123 [ lemma : estimates between r and e ] let @xmath105 be a density matrix and @xmath106 satisfy @xmath110 .
then [ r < > e ] @xmath124 it is convenient to introduce the shorthand @xmath125 thus , @xmath126 which is . in order to prove it is easiest to use the identity @xmath127 valid for any one - dimensional projector @xmath128 and nonnegative density matrix @xmath129 .
this was first observed by seiringer ; see @xcite . for the convenience of the reader we recall the proof of .
let @xmath130 be the sequence of eigenvalues of the trace class operator @xmath131 .
since @xmath128 is a rank one projection , @xmath88 has at most one negative eigenvalue , say @xmath132 .
also , @xmath133 implies that @xmath134 .
thus , @xmath135 , which is . now yields @xmath136 then follows from @xmath137 alternatively , one may prove without by using the polar decomposition and the cauchy - schwarz inequality for hilbert - schmidt operators .
up to constant factors the bounds are sharp , as the following examples show . here
we drop the irrelevant index @xmath31 .
consider first @xmath138 where @xmath139 .
as above we set @xmath140 .
one finds @xmath141 so that is sharp up to a constant factor .
it is not hard to see that if @xmath142 and @xmath143 commute then can be replaced with the stronger bound @xmath144 . in order to show that in general is sharp up to a constant factor , consider @xmath145 where @xmath139 .
one readily sees that @xmath142 is a density matrix ( in fact , a one - dimensional projector ) .
a short calculation yields @xmath146 as well as @xmath147 using @xmath148 we therefore find @xmath149 as desired .
this section is devoted to the case @xmath150 .
our method relies on controlling the quantity @xmath151 to this end , we derive an estimate of the form @xmath152 which , by grönwall s lemma , implies @xmath153 in order to show , we differentiate @xmath154 and note that all terms arising from the one - particle hamiltonian vanish .
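for concreteness , the grönwall step can be spelled out : if $\alpha'(t) \leqslant a_n(t)\, \alpha(t) + b_n(t)$ , then ( a standard estimate ; the names $a_n$ , $b_n$ echo the conclusion of the proof below ) :

$$% integral form of gronwall 's lemma for the differential inequality above
\alpha(t) \;\leqslant\; \mathrm{e}^{\int_0^t a_n(s)\, \mathrm{d}s}\, \alpha(0) + \int_0^t \mathrm{e}^{\int_s^t a_n(r)\, \mathrm{d}r}\, b_n(s)\, \mathrm{d}s\,.$$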
we control the remaining terms by introducing the time - dependent orthogonal projections @xmath155 we then partition @xmath156 appropriately and use the following heuristics for controlling the terms that arise in this manner .
factors @xmath157 are used to control singularities of @xmath13 by exploiting the smoothness of the hartree wave function @xmath23 .
factors @xmath158 are expected to yield something small , i.e. proportional to @xmath154 , in accordance with the identity @xmath159 . for the following it is convenient to rewrite the hamiltonian as @xmath160 where @xmath161 .
we may now list our assumptions . *
the one - particle hamiltonian @xmath11 is self - adjoint and bounded from below . without loss of generality
we assume that @xmath162 .
we define the hilbert space @xmath163 as the form domain of @xmath164 with norm @xmath165 * the hamiltonian is self - adjoint and bounded from below .
we also assume that @xmath166 . *
the interaction potential @xmath13 is a real and even function satisfying @xmath167 , where @xmath168 . *
the solution @xmath70 of satisfies @xmath169 where @xmath170 are defined through @xmath171 here @xmath172 denotes the dual space of @xmath96 , i.e. the closure of @xmath29 under the norm @xmath173 .
we now state our main result . [
theorem for l^2 potentials ] let @xmath174 satisfy @xmath175 , and @xmath176 satisfy @xmath177 . assume that assumptions ( a1 ) - ( a4 ) hold . then @xmath178 where @xmath179 we may combine this result with the observations of section [ measures of convergence ] .
[ corollary of main theorem ] let the sequence @xmath174 , @xmath180 , satisfy the assumptions of theorem [ theorem for l^2 potentials ] as well as @xmath181 then we have @xmath182 corollary [ corollary of main theorem ] implies that we can control the condensation of @xmath183 particles .
assumption ( a3 ) allows for singularities in @xmath13 up to , but not including , the type @xmath184 in three dimensions . in the next section
we treat a larger class of interaction potentials .
assumption ( a4 ) is typically verified by solving the hartree equation in a sobolev space of high index ( see e.g. section [ example : semirelativistic kinetic energy ] ) . instead of requiring a global - in - time solution @xmath70
, it is enough to have a local - in - time solution on @xmath185 for some @xmath186 . [
remark : time - dependent potentials ] if @xmath187 , or in other words if @xmath188 and @xmath189 are integrable in @xmath50 over @xmath190 , then all estimates are uniform in time .
this describes a scattering regime where the time evolution is asymptotically free for large times .
such an integrability condition requires large exponents @xmath191 , which translates to small exponents @xmath192 , i.e. an interaction potential with strong decay .
the result easily extends to time - dependent one - particle hamiltonians @xmath193 . replace ( a1 ) and ( a2 ) with * the hamiltonian @xmath194 is self - adjoint and bounded from below .
we assume that there is an operator @xmath195 such that @xmath196 for all @xmath50 .
define the hilbert space @xmath197 as in ( a1 ) . *
the hamiltonian @xmath198 is self - adjoint and bounded from below .
we assume that @xmath199 for all @xmath50 .
we also assume that the @xmath0-body propagator @xmath200 , defined by @xmath201 exists and satisfies @xmath202 for all @xmath50 .
it is then straightforward that theorem [ theorem for l^2 potentials ] holds with the same proof .
[ remark : alternative assumptions for hardy ] in some cases ( see e.g. section [ example : nonrelativistic kinetic energy ] below ) it is convenient to modify the assumptions as follows . replace ( a3 ) and ( a4 ) with * the interaction potential @xmath13 is a real and even function satisfying @xmath203 for some constant @xmath204 . without loss of generality
we assume that @xmath205 . *
the solution @xmath70 of satisfies @xmath206 then theorem [ theorem for l^2 potentials ] and corollary [ corollary of main theorem ] hold with @xmath207 the proof remains virtually unchanged .
one replaces with , as well as with @xmath208 which is an easy consequence of .
we list two examples of systems satisfying the assumptions of theorem [ theorem for l^2 potentials ] .
consider nonrelativistic particles in @xmath209 confined by a strong trapping potential .
the particles interact by means of the coulomb potential : @xmath58 , where @xmath210 .
the one - particle hamiltonian is of the form @xmath211 , where @xmath212 is a measurable function on @xmath209 .
decompose @xmath212 into its positive and negative parts : @xmath213 , where @xmath214 .
we assume that @xmath215 and that @xmath216 is @xmath217-form bounded with relative bound less than one , i.e. there are constants @xmath218 and @xmath219 such that @xmath220 thus @xmath221 is positive , and it is not hard to see that @xmath11 is essentially self - adjoint on @xmath222 .
this follows by density and a standard argument using riesz s representation theorem to show that the equation @xmath223 has a unique solution @xmath224 for each @xmath225 .
it is now easy to see that assumptions ( a1 ) and ( a2 ) hold with the one - particle hamiltonian @xmath226 for some @xmath227 .
let us assume without loss of generality that @xmath228 .
next , we verify assumptions ( a3 ) and ( a4 ) ( see remark [ remark : alternative assumptions for hardy ] ) .
we find @xmath229 where the second step follows from hardy s inequality and translation invariance of @xmath230 , and the third step is a simple consequence of .
this proves ( a3 ) .
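the hardy inequality used in the second step is the standard three - dimensional bound ( the constant 4 is sharp ) :

$$% hardy 's inequality in three dimensions
\int_{\mathbb{R}^3} \frac{\lvert f(x) \rvert^2}{\lvert x \rvert^2}\, \mathrm{d}x \;\leqslant\; 4 \int_{\mathbb{R}^3} \lvert \nabla f(x) \rvert^2\, \mathrm{d}x\,.$$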
next , take @xmath231 . by standard methods
( see e.g. the presentation of @xcite ) one finds that ( a4 ) holds .
moreover , the mass @xmath232 and the energy @xmath233 are conserved under time evolution . using the identity @xmath234 and hardy s inequality one sees that @xmath235 and therefore @xmath236 for all @xmath50 .
we conclude : theorem [ theorem for l^2 potentials ] holds with @xmath237 .
more generally , the preceding discussion holds for interaction potentials @xmath238 , where @xmath239 denotes the weak @xmath240 space ( see e.g. @xcite ) .
this follows from a short computation using symmetric - decreasing rearrangements ; we omit further details .
this example generalizes the results of @xcite , @xcite and @xcite .
consider semirelativistic particles in @xmath209 whose one - particle hamiltonian is given by @xmath241 .
the particles interact by means of a coulomb potential : @xmath61 .
we impose the condition @xmath242 .
this condition is necessary for both the stability of the @xmath0-body problem ( i.e. assumption ( a2 ) ) and the global well - posedness of the hartree equation .
see @xcite for details .
it is well known that assumptions ( a1 ) and ( a2 ) hold in this case . in order to show ( a4 )
we need some regularity of @xmath70 . to this end , let @xmath243 and take @xmath244 .
theorem 3 of @xcite implies that has a unique global solution in @xmath245 .
therefore sobolev s inequality implies that ( a4 ) holds with @xmath246 thus @xmath247 , and ( a3 ) holds with appropriately chosen values of @xmath248 .
we conclude : theorem [ theorem for l^2 potentials ] holds for some continuous function @xmath249 .
( in fact , as shown in @xcite , one has the bound @xmath250 . )
this example generalizes the result of @xcite .
define the time - dependent projectors @xmath155 write @xmath251 and define @xmath252 , for @xmath253 , as the term obtained by multiplying out and selecting all summands containing @xmath31 factors @xmath254 .
in other words , @xmath255 if @xmath256 we set @xmath257 .
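since the displayed definitions survive only as placeholders , a plausible rendering is ( assumed notation ) :

$$% assumed : rank - one projector onto \varphi in the i - th variable , its
% complement , and the symmetrized projector onto " exactly k factors q "
p_i \;=\; \lvert \varphi(x_i) \rangle \langle \varphi(x_i) \rvert\,, \qquad q_i \;=\; 1 - p_i\,, \qquad
P_k \;=\; \sum_{\substack{a \subset \{1, \dots, n\} \\ \lvert a \rvert = k}} \prod_{i \in a} q_i \prod_{j \notin a} p_j\,.$$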
it is easy to see that the following properties hold :
1 . @xmath252 is an orthogonal projector ,
2 . @xmath258 ,
3 . @xmath259 .
next ,
for any function @xmath260 we define the operator @xmath261 it follows immediately that @xmath262 and that @xmath263 commutes with @xmath192 and @xmath252 .
we shall often make use of the functions @xmath264 we have the relation @xmath265 thus , by symmetry of @xmath266 , we get @xmath267 the correspondence @xmath268 of yields the following useful bounds . [ replacing q s with n s ] for any nonnegative function @xmath269 we have @xmath270 the proof of is an immediate consequence of . in order to prove we write , using symmetry of @xmath266 as well as , @xmath271 which is the claim . next
, we introduce the shift operation @xmath272 , @xmath273 , defined on functions @xmath87 through @xmath274 its usefulness for our purposes is encapsulated by the following lemma . [ pulling projectors through ]
let @xmath275 and @xmath88 be an operator on @xmath276 .
let @xmath277 , @xmath278 , be two projectors of the form @xmath279 where each @xmath280 stands for either @xmath143 or @xmath254 .
then @xmath281 where @xmath282 and @xmath283 is the number of factors @xmath254 in @xmath277 .
define @xmath284 then , @xmath285 the claim follows from the fact that @xmath286 commutes with @xmath287 .
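for orientation , the weighted operators and the particular weight functions used below presumably take the form ( assumed rendering , matching the roles these objects play in the estimates ) :

$$% assumed : operator weighted by f over the decomposition P_k , and the
% two weights whose shifts \tau_n appear in the proofs
\widehat{f} \;=\; \sum_{k=0}^{n} f(k)\, P_k\,, \qquad m(k) \;=\; \frac{k}{n}\,, \qquad n(k) \;=\; \sqrt{\frac{k}{n}}\,.$$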
let us abbreviate @xmath289 . from ( a3 ) and ( a4 ) we find @xmath290 ( see below ) .
then @xmath291 , where @xmath292 .
thus , for any @xmath293 independent of @xmath50 we have @xmath294 . on the other hand , it is easy to see from ( a3 ) and ( a4 ) that @xmath295 . combining these observations , and noting that @xmath296 by ( a2 ) , we see that @xmath297 is differentiable in @xmath50 with derivative @xmath298 , where @xmath299 .
thus , @xmath300 . by symmetry of @xmath266 and @xmath301 we get @xmath302 . in order to estimate the right - hand side , we introduce @xmath303 on both sides of the commutator in .
of the sixteen resulting terms only three different types survive :
$$\begin{aligned}
& \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{m} \big]\, q_1 p_2\, \psi \big\rangle && \mathrm{(i)} \\
& \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, q_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{m} \big]\, q_1 q_2\, \psi \big\rangle && \mathrm{(ii)} \\
& \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{m} \big]\, q_1 q_2\, \psi \big\rangle && \mathrm{(iii)}\,.
\end{aligned}$$
indeed , lemma [ pulling projectors through ] implies that terms with the same number of factors @xmath254 on the left and on the right vanish .
what remains is @xmath305 the remainder of the proof consists in estimating each term . _
term @xmath306 . _
first , we remark that @xmath307 this is easiest to see using operator kernels ( we drop the trivial indices @xmath308 ) : @xmath309 therefore , @xmath310 equals
$$\frac{-\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2 \big[ w^\varphi_1 \,,\, \widehat{m} \big]\, q_1 p_2\, \psi \big\rangle\,.$$
using lemma [ pulling projectors through ] we find @xmath311 this gives @xmath312 by ( a3 ) , we may write @xmath313 by young s inequality , @xmath314 where @xmath315 are defined through @xmath316 therefore , @xmath317 taking the infimum over all decompositions yields @xmath318 note that ( a3 ) and ( a4 ) imply @xmath319 so that the right - hand side of is finite . summarizing , @xmath320 _ term @xmath321 . _ applying lemma [ pulling projectors through ] to ( ii ) yields @xmath322 so that @xmath323 the second term of is bounded by @xmath324 where we used the bound as well as .
the first term of is bounded using cauchy - schwarz by @xmath325 this follows by applying to @xmath326 .
thus we get the bound @xmath327 we now proceed as above . using the decomposition
we get @xmath328 then young s inequality gives @xmath329 which implies that @xmath330 putting all of this together we get @xmath331 .
_ term @xmath332 . _ the final term ( iii ) is equal to
$$\frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2\, (n-1)\, w_{12} \bigl( \widehat{m} - \widehat{\tau_{-2}\, m} \bigr)\, q_1 q_2\, \psi \big\rangle \;=\; \mathrm{i}\, \frac{n-1}{n}\, \big\langle{\psi}\,,\, p_1 p_2\, w_{12}\, q_1 q_2\, \psi \big\rangle\,,$$
where we used lemma [ pulling projectors through ] .
next , we note that , on the range of @xmath334 , the operator @xmath335 is well - defined and bounded .
thus ( iii ) is equal to @xmath336 where we used lemma [ pulling projectors through ] again .
we now use cauchy - schwarz to get @xmath337 using the estimate we get finally @xmath338 _ conclusion of the proof .
_ we have shown that the estimate holds with @xmath339 and with $a_n(t) \;=\; b_n(t)/n$ . using @xmath29-norm conservation @xmath340 and interpolation we find @xmath341 .
thus , @xmath342 the claim now follows from the grönwall estimate .
in this section we extend the results of the section [ l2 potentials ] to more singular interaction potentials .
we consider the case @xmath343 , where @xmath344 for example in three dimensions @xmath345 , which corresponds to singularities up to , but not including , the type @xmath346 .
of course , there are other restrictions on the interaction potential which ensure the stability of the @xmath0-body hamiltonian and the well - posedness of the hartree equation . in practice
, it is often these latter restrictions that determine the class of allowed singularities . in the words of @xcite ( p. 169 ) , it is `` venerable physical folklore '' that an @xmath0-body hamiltonian of the form , with @xmath60 and @xmath347 for @xmath348 , produces reasonable quantum dynamics in three dimensions .
mathematically , this means that such a hamiltonian is self - adjoint ; this is a well - known result ( see e.g. @xcite ) .
the corresponding hartree equation is known to be globally well - posed ( see @xcite ) .
this section answers ( affirmatively ) the question whether , in the case of such singular interaction potentials , the mean - field limit of the @xmath0-body dynamics is governed by the hartree equation . as in section [ l2 potentials ] , we need to control expressions of the form @xmath349 .
the situation is considerably more involved when @xmath350 is not locally integrable .
an important step in dealing with such potentials in our proof is to express @xmath13 as the divergence of a vector field @xmath351 .
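as a concrete check of such a representation in the power - law case ( the choice $w(x) = \lvert x \rvert^{-s}$ with $0 < s < d$ is an assumed illustration , not the paper 's exact potential ) :

$$% divergence of the radial field x |x|^{-s} / (d-s) : since
% \nabla \cdot ( x\,|x|^{-s} ) = d\,|x|^{-s} - s\,|x|^{-s} = (d-s)\,|x|^{-s} ,
% the field \xi(x) = x\,|x|^{-s} / (d-s) satisfies \nabla \cdot \xi = w
\nabla \cdot \biggl( \frac{x\, \lvert x \rvert^{-s}}{d - s} \biggr) \;=\; \lvert x \rvert^{-s} \;=\; w(x)\,.$$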
this approach requires the control of not only @xmath352 but also @xmath353 , which arises from integrating by parts in expressions containing the factor @xmath354 . as it turns out ,
@xmath355 , defined through @xmath356 does the trick .
this follows from an estimate exploiting conservation of energy ( see lemma [ lemma : energy estimate ] below ) . the inequality @xmath357 and the representation yield @xmath358 * the one - particle hamiltonian @xmath11 is self - adjoint and bounded from below . without loss of generality
we assume that @xmath162 .
we also assume that there are constants @xmath359 such that @xmath360 as an inequality of forms on @xmath361 . *
the hamiltonian is self - adjoint and bounded from below .
we also assume that @xmath166 , where @xmath362 is defined as in assumption ( a1 ) .
* there is a constant @xmath363 such that @xmath364 as an inequality of forms on @xmath365 . *
the interaction potential @xmath13 is a real and even function satisfying @xmath366 , where @xmath367 . * the solution @xmath70 of satisfies @xmath368
where @xmath369 is equipped with the norm @xmath370 next , we define the microscopic energy per particle @xmath371 as well as the hartree energy @xmath372 . by spectral calculus , @xmath373 is independent of @xmath50 .
also , invoking assumption ( b5 ) to differentiate @xmath374 with respect to @xmath50 shows that @xmath374 is conserved as well . summarizing , @xmath375 we may now state the main result of this section . [ theorem for singular potentials ] let @xmath174 and assume that assumptions ( b1 ) - ( b5 ) hold .
then there is a constant @xmath376 , depending only on @xmath3 , @xmath11 , @xmath13 and @xmath143 , such that @xmath377 where @xmath378 and @xmath379 we have convergence to the mean - field limit whenever @xmath380 and @xmath381 .
for instance if we start in a fully factorized state , @xmath382 , then @xmath383 and @xmath384 so that the theorem [ theorem for singular potentials ] yields @xmath385 and the analogue of corollary [ corollary of main theorem ] holds .
theorem [ theorem for singular potentials ] remains valid for a large class of time - dependent one - particle hamiltonians @xmath194 .
see section [ section : remark on time - dependent potentials ] below for a full discussion .
[ [ example - nonrelativistic - particles - with - interaction - potential - of - critical - type ] ] example : nonrelativistic particles with interaction potential of critical type ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ consider nonrelativistic particles in @xmath209 with one - particle hamiltonian @xmath60 .
the interaction potential is given by @xmath390 .
this corresponds to a critical nonlinearity of the hartree equation .
we require that @xmath391 , which ensures that the @xmath0-body hamiltonian is stable and the hartree equation has global solutions . to see this , recall hardy s inequality in three dimensions , @xmath392 one easily infers that assumptions ( b1 )
( b3 ) hold . moreover , assumption ( b4 ) holds for any @xmath393 . in order to verify assumption ( b5 )
we refer to @xcite , where local well - posedness is proven .
global existence follows by standard methods using conservation of the mass @xmath394 , conservation of the energy @xmath395 , and hardy s inequality .
together they yield an a - priori bound on @xmath396 , from which an a - priori bound for @xmath397 may be inferred ; see @xcite for details .
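the conserved quantities invoked here presumably take the standard hartree forms ( assumed rendering , with $h$ and $w$ as in the setup ) :

$$\begin{aligned}
% assumed mass and energy functionals of the hartree flow
\mathcal{m}(\varphi_t) &\;=\; \int \lvert \varphi_t \rvert^2\,, \\
\mathcal{e}(\varphi_t) &\;=\; \big\langle \varphi_t \,,\, h\, \varphi_t \big\rangle + \frac{1}{2} \int \bigl( w * \lvert \varphi_t \rvert^2 \bigr)\, \lvert \varphi_t \rvert^2\,.
\end{aligned}$$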
write @xmath401 as well as @xmath402 inserting @xmath403 in front of every @xmath266 in and multiplying everything out yields @xmath404 we want to find an upper bound for the left - hand side . in order to control the last term on the right - hand side for negative interaction potentials , we need to use some of the kinetic energy on the left - hand side . to this end
, we split the left - hand side by multiplying it with @xmath405 .
thus , using , we get @xmath406 the rest of the proof consists in estimating each line on the right - hand side of separately
. there is nothing to be done with the first line .
_ line 4 . _ the fourth line on the right - hand side of is bounded in absolute value by @xmath411 where in the last step we used lemma [ pulling projectors through ] . using cauchy - schwarz ,
we thus get @xmath412 where in the second step we used lemma [ replacing q s with n s ] . using @xmath413 we find @xmath414 _ line 5 . _ finally , we turn our attention to the fifth line on the right - hand side of , which is bounded in absolute value by @xmath415 where @xmath416 one finds , using , lemma [ pulling projectors through ] and lemma [ replacing q s with n s ] , @xmath417 let us now consider @xmath422 . in order to deal with the singularities in @xmath423 , we write it as the divergence of a vector field @xmath424 , @xmath425 this is nothing but a problem of electrostatics , which is solved by @xmath426 with some constant @xmath76 depending on @xmath3 . by the hardy - littlewood - sobolev inequality
, we find @xmath427 thus if @xmath428 then @xmath429 . denote by @xmath430 multiplication by @xmath431 . for the following it is convenient to write @xmath432 , where a summation over @xmath433 is implied .
recalling lemma [ pulling projectors through ] , we therefore get @xmath434 integrating by parts yields @xmath435 let us begin by estimating the first term .
recalling that @xmath436 , we find that the first term on the right - hand side of is equal to @xmath437 where we used young s inequality , assumption ( b1 ) , and lemma [ replacing q s with n s ] .
recalling that @xmath438 , we conclude that the first term on the right - hand side of is bounded by @xmath439 next , we estimate the second term on the right - hand side of .
it is equal to @xmath440 we estimate @xmath441 by introducing @xmath442 on the left .
the term arising from @xmath443 is bounded by @xmath444 the term arising from @xmath334 in the above splitting is dealt with in exactly the same way .
thus we have proven that the second term on the right - hand side of is bounded by @xmath445 _ conclusion of the proof .
_ putting all the estimates of the right - hand side of together , we find @xmath447 next , from @xmath448 we deduce @xmath449 now , recalling that @xmath450 , we find @xmath451 therefore , @xmath452 plugging in yields @xmath453 next , we observe that assumption ( b1 ) implies @xmath454 so that we get @xmath455 now we claim that @xmath456 this follows from the general estimate @xmath457 which itself follows from the elementary inequality @xmath458 the claim of the lemma now follows from by using assumption ( b1 ) .
we start exactly as in section [ l2 potentials ] .
assumptions ( b1 ) - ( b5 ) imply that @xmath355 is differentiable in @xmath50 with derivative @xmath460 , which can be decomposed as $2\,\mathrm{(i)} + 2\,\mathrm{(ii)} + \mathrm{(iii)} + \text{complex conjugate}$ , where
$$\begin{aligned}
\mathrm{(i)} &\;\mathrel{\mathop:}=\; \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{n} \big]\, q_1 p_2\, \psi \big\rangle\,, \\
\mathrm{(ii)} &\;\mathrel{\mathop:}=\; \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, q_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{n} \big]\, q_1 q_2\, \psi \big\rangle\,, \\
\mathrm{(iii)} &\;\mathrel{\mathop:}=\; \frac{\mathrm{i}}{2}\, \big\langle{\psi}\,,\, p_1 p_2 \big[ (n-1)\, w_{12} - n\, w^\varphi_1 - n\, w^\varphi_2 \,,\, \widehat{n} \big]\, q_1 q_2\, \psi \big\rangle\,.
\end{aligned}$$
_ term @xmath306 .
_ using we find that @xmath462 equals
$$\big\lvert \big\langle{\psi}\,,\, p_1 p_2 \big[ w^\varphi_1 \,,\, \widehat{n} \big]\, q_1 p_2\, \psi \big\rangle \big\rvert \;=\; \big\lvert \big\langle{\psi}\,,\, p_1 p_2\, w^\varphi_1 \bigl( \widehat{n} - \widehat{\tau_{-1}\, n} \bigr)\, q_1 p_2\, \psi \big\rangle \big\rvert\,,$$
where we used lemma [ pulling projectors through ] .
define @xmath463 thus , @xmath464 by . _
term @xmath321 .
_ using lemma [ pulling projectors through ] we find that @xmath465 equals
$$\bigg\lvert \bigg\langle{\psi}\,,\, q_1 p_2 \biggl( \frac{n-1}{n}\, w_{12} - w^\varphi_2 \biggr)\, \widehat{\mu}\,\, q_1 q_2\, \psi \bigg\rangle \bigg\rvert \;\leqslant\; \underbrace{\big\lvert \big\langle{\psi}\,,\, q_1 p_2\, w_{12}\, \widehat{\mu}\,\, q_1 q_2\, \psi \big\rangle \big\rvert}_{{=\mathrel{\mathop:}}\; \mathrm{(a)}} + \underbrace{\big\lvert \big\langle{\psi}\,,\, q_1 p_2\, w^\varphi_2\, \widehat{\mu}\,\, q_1 q_2\, \psi \big\rangle \big\rvert}_{{=\mathrel{\mathop:}}\; \mathrm{(b)}}\,.$$
one immediately finds @xmath466 in ( a ) we split @xmath467 with a resulting splitting @xmath468 .
the easy part is @xmath469 in order to deal with @xmath470 we write @xmath471 as the divergence of a vector field @xmath424 , exactly as in the proof of lemma [ lemma : energy estimate ] ; see and the remarks after it .
we integrate by parts to find @xmath472 the first term of is equal to @xmath473 where in the second step we used , in the third lemma [ replacing q s with n s ] , and in the last , young s inequality , and .
the second term of is equal to @xmath474 where we used lemma [ pulling projectors through ] .
we estimate the first term of . the second term is dealt with in exactly the same way .
we find @xmath475 in summary , we have proven that @xmath476 _ term @xmath332 . _ using lemma [ pulling projectors through ] we find that @xmath477 equals
$$( n-1 )\, \big\lvert \big\langle{\psi}\,,\, p_1 p_2\, w_{12} \bigl( \widehat{n} - \widehat{\tau_{-2}\, n} \bigr)\, q_1 q_2\, \psi \big\rangle \big\rvert\,.$$
thus , using lemma [ pulling projectors through ] , we find @xmath481 where in the fifth step we used lemma [ replacing q s with n s ] . in order to estimate @xmath482
we introduce a splitting of @xmath423 into `` singular '' and `` regular '' parts , @xmath483 where @xmath75 is a positive ( @xmath0-dependent ) constant we choose later . for future reference
we record the estimates let us start with @xmath486 . as in
, we use the representation @xmath487 then and imply that @xmath488 integrating by parts , we find @xmath489 using @xmath490 and lemma [ replacing q s with n s ] we find that the first term of is bounded by @xmath491 where in the second step we used the estimate . next ,
using lemma [ pulling projectors through ] , we find that the second term of is equal to @xmath492 we estimate the first term ( the second is dealt with in exactly the same way ) : @xmath493 summarizing , @xmath494 finally , we estimate @xmath495 where @xmath496 is some partition of unity to be chosen later .
the need for this partitioning will soon become clear . in order to bound the term with @xmath497 , we note that the operator norm of @xmath498 on the full space @xmath499 is much larger than on its symmetric subspace .
thus , as a first step , we symmetrize the operator @xmath500 in coordinate @xmath501 .
we get the bound @xmath502 using @xmath503 we find @xmath504 where @xmath505 the easy part is @xmath506 let us therefore concentrate on @xmath507 with @xmath508 arising from the splitting @xmath509 .
we start with @xmath510 by cauchy - schwarz and symmetry of @xmath266 . here
@xmath511 is any complex square root . in order to estimate this
we claim that , for @xmath512 , @xmath513 indeed , by , we have @xmath514 the operator @xmath515 is equal to @xmath516 , where @xmath517 thus , @xmath518 from which follows immediately . using , we get @xmath519
now let us choose @xmath520 for some @xmath521 .
then @xmath522 implies @xmath523 similarly , we find @xmath524 thus we have proven @xmath525 going back to , we see that @xmath526 what remains to estimate is the term of @xmath527 containing @xmath528 , @xmath529 using @xmath530 we find @xmath531 where @xmath532 since @xmath533 we find @xmath534 thus , @xmath535 and we get @xmath536 by .
next , using lemma [ pulling projectors through ] , we find @xmath537 where , as above , the splitting @xmath508 arises from writing @xmath509 .
thus , @xmath538 by cauchy - schwarz and symmetry of @xmath266 .
using we get @xmath539 similarly , @xmath540 plugging all this back into , we find that @xmath541 summarizing : @xmath542 from which we deduce @xmath543 let us set @xmath544 and optimize in @xmath545 and @xmath546 .
this yields the relations @xmath547 which imply @xmath548 with @xmath549 .
thus , @xmath550 where @xmath551 satisfies .
theorem [ theorem for singular potentials ] can be extended to time - dependent external potentials @xmath194 without much additional effort .
the only complication is that energy is no longer conserved .
we overcome this problem by observing that , while the energies @xmath554 and @xmath374 exhibit large variations in @xmath50 , their difference remains small . in the following we estimate the quantity @xmath555 by controlling its time derivative . *
the hamiltonian @xmath194 is self - adjoint and bounded from below .
we assume that there is an operator @xmath195 such that @xmath196 for all @xmath50 .
we define the hilbert space @xmath197 as in ( a1 ) , and the space @xmath556 as in ( b5 ) using @xmath557 .
we also assume that there are time - independent constants @xmath359 such that @xmath558 for all @xmath50 .
+ we make the following assumptions on the differentiability of @xmath194 .
the map @xmath559 is continuously differentiable for all @xmath293 , with derivative @xmath560 for some self - adjoint operator @xmath561 .
moreover , we assume that the quantities @xmath562 are continuous and finite for all @xmath50 . *
the hamiltonian @xmath198 is self - adjoint and bounded from below .
we assume that @xmath199 for all @xmath50 .
we also assume that the @xmath0-body propagator @xmath200 , defined by @xmath201 exists and satisfies @xmath202 for all @xmath50 .
* there is a time - independent constant @xmath363 such that @xmath563 for all @xmath50 .
assume that assumptions ( b1 ) - ( b3 ) , ( b4 ) , and ( b5 ) hold .
then there is a continuous nonnegative function @xmath564 , independent of @xmath0 and @xmath565 , such that @xmath566 with @xmath386 defined in .
we start by deriving an upper bound on the energy difference @xmath567 .
assumptions ( b1 ) and ( b2 ) and the fundamental theorem of calculus imply @xmath568 by inserting @xmath569 on both sides of @xmath570 we get ( omitting the time argument @xmath571 ) @xmath572 the first two terms of are equal to @xmath573 the third term of is bounded , using lemmas [ replacing q s with n s ] and [ pulling projectors through ] , by @xmath574 the last term of is equal to @xmath575 thus , using assumption ( b1 ) we conclude that @xmath576 for all @xmath50 . here , and in the following , @xmath577 denotes some continuous nonnegative function that does not depend on @xmath0 .
next , we observe that , under assumptions ( b1 ) - ( b3 ) , the proof of lemma [ lemma : energy estimate ] remains valid for time - dependent one - particle hamiltonians .
thus , implies @xmath578 plugging this into yields @xmath579 therefore , @xmath580 next , we observe that , under assumptions ( b1 ) - ( b3 ) , the derivation of the estimate in the proof of theorem [ theorem for singular potentials ] remains valid for time - dependent one - particle hamiltonians .
therefore , @xmath581 applying grönwall s lemma to the sum of and yields @xmath582 plugging this back into yields @xmath583 which is the claim . | we consider the time evolution of a system of @xmath0 identical bosons whose interaction potential is rescaled by @xmath1 .
we choose the initial wave function to describe a condensate in which all particles are in the same one - particle state .
it is well known that in the mean - field limit @xmath2 the quantum @xmath0-body dynamics is governed by the nonlinear hartree equation . using a nonperturbative method
, we extend previous results on the mean - field limit in two directions .
first , we allow a large class of singular interaction potentials as well as strong , possibly time - dependent external potentials .
second , we derive bounds on the rate of convergence of the quantum @xmath0-body dynamics to the hartree dynamics . |
SECTION 1. FINDINGS.
The Congress finds that--
(1) at least 30 million Americans lack access to even the
most basic health services;
(2) access to health care is especially difficult for those
Americans who--
(A) live in medically underserved rural communities
or inner city neighborhoods;
(B) lack public or private health insurance
coverage and the ability to pay directly for care;
(C) must move for work purposes, such as migrant
farmworkers;
(D) are members of minority groups, or who speak
limited English; or
(E) are members of other vulnerable groups,
including persons who are homeless or are high-risk
pregnant women, infants and children;
(3) the consequences of poor access to health care are
evidenced in elevated infant and childhood mortality rates,
dangerously low childhood immunization rates, overutilization
of hospital emergency rooms or other inappropriate providers of
primary care services, and hospitalization rates for
preventable conditions that are significantly higher than the
national average;
(4) efforts to provide access to essential health care
services for medically underserved Americans will not only
contribute to improved health status, but will also result in
less unnecessary care and reduced overall costs of health care;
and
(5) the federally qualified health centers, including the
community and migrant health centers which serve more than 6
million needy Americans, provide an effective and proven model
for extending access to all medically underserved Americans.
SEC. 2. ESTABLISHMENT OF NEW PART UNDER THE MEDICAID PROGRAM TO PROVIDE
FUNDS FOR A NEW FEDERALLY QUALIFIED HEALTH CENTERS GRANTS
PROGRAM.
(a) In General.--Title XIX of the Social Security Act is amended by
inserting after the title heading the following:
``Part A--Payment to States for Medical Assistance''.
(b) Purpose.--Section 1901 of the Social Security Act (42 U.S.C.
1396) is amended--
(1) in the first sentence--
(A) by striking ``and (2)'' and inserting ``(2)'';
and
(B) by striking ``self care,'' and inserting ``self
care; and (3) grants to assist entities in providing
health care services to medically underserved
individuals,''; and
(2) by amending the second sentence to read as follows:
``The sums made available under this section shall be used for
making payments--
(A) under this part to States which have submitted,
and had approved by the Secretary, State plans for
medical assistance; and
(B) under part B to entities meeting the
requirements under such part.''.
(c) Effective Date.--The amendments made by subsections (a) and (b)
shall become effective on October 1, 1993.
SEC. 3. ESTABLISHMENT OF NEW PROGRAM TO PROVIDE FUNDS TO ALLOW
FEDERALLY QUALIFIED HEALTH CENTERS AND OTHER ENTITIES OR
ORGANIZATIONS TO PROVIDE EXPANDED SERVICES TO MEDICALLY
UNDERSERVED INDIVIDUALS.
(a) In General.--Title XIX of the Social Security Act (42 U.S.C.
1396 et seq.) is amended by adding at the end the following new part:
``Part B--Grants to Qualified Entities for Health Services
``health services access program
``Sec. 1941. (a) Establishment of Health Services Access Program.--
From amounts appropriated under section 1901, the Secretary shall,
acting through the Bureau of Health Care Delivery Assistance, award
grants under this section to federally qualified health centers (each
in this part referred to as an `FQHC') and other entities and
organizations submitting applications under this section (as described
in subsection (c)) for the purpose of providing access to services for
medically underserved populations (as defined in section 330(b)(3) of
the Public Health Service Act) or in high impact areas (as defined in
section 329(a)(5) of the Public Health Service Act) not currently being
served by an FQHC.
``(b) Eligibility for Grants.--(1) The Secretary shall award grants
under this section to entities or organizations described in this
paragraph and paragraph (2) which have submitted a proposal to the
Secretary to expand such entities' or organizations' operations
(including expansions to new sites (as determined necessary by the
Secretary)) to serve medically underserved populations or high impact
areas not currently served by an FQHC and which--
``(A) have as of January 1, 1993, been certified by the
Secretary as an FQHC under section 1905(l)(2)(B); or
``(B) have submitted applications to the Secretary to
qualify as FQHC's under section 1905(l)(2)(B); or
``(C) have submitted a plan to the Secretary which provides
that the entity will meet the requirements to qualify as an
FQHC when operational.
``(2)(A) The Secretary shall also make grants under this section to
public or private nonprofit agencies, health care entities or
organizations which meet the requirements necessary to qualify as an
FQHC except, the requirement that such entity have a consumer majority
governing board and which have submitted a proposal to the Secretary to
provide those services provided by an FQHC as defined in section
1905(l)(2)(B) and which are designed to promote access to primary care
services or to reduce reliance on hospital emergency rooms or other
high cost providers of primary health care services, provided such
proposal is developed by the entity or organizations (or such entities
or organizations acting in a consortium in a community) with the review
and approval of the Governor of the State in which such entity or
organization is located.
``(B) The Secretary shall provide in making grants to entities or
organizations described in this paragraph that no more than 10 percent
of the funds provided for grants under this section shall be made
available for grants to such entities or organizations.
``(c) Application Requirements.--(1) In order to be eligible to
receive a grant under this section, an FQHC or other entity or
organization must submit an application in such form and at such time
as the Secretary shall prescribe and which meets the requirements of
this subsection.
``(2) An application submitted under this section must provide--
``(A)(i) for a schedule of fees or payments for the
provision of the services provided by the entity designed to
cover its reasonable costs of operations; and
``(ii) for a corresponding schedule of discounts to be
applied to such fees or payments, based upon the patient's
ability to pay (determined by using a sliding scale formula
based on the income of the patient);
``(B) assurances that the entity or organization provides
services to persons who are eligible for benefits under title
XVIII, for medical assistance under a State plan approved under
part A or for assistance for medical expenses under any other
public assistance program or private health insurance program;
and
``(C) assurances that the entity or organization has made
and will continue to make every reasonable effort to collect
reimbursement for services--
``(i) from persons eligible for assistance under
any of the programs described in subparagraph (B); and
``(ii) from patients not entitled to benefits under
any such programs.
``(d) Limitations on Use of Funds.--(1) From the amounts awarded to
an entity or organization under this section, funds may be used for
purposes of planning but may only be expended for the costs of--
``(A) assessing the needs of the populations or proposed
areas to be served;
``(B) preparing a description of how the needs identified
will be met;
``(C) development of an implementation plan that
addresses--
``(i) recruitment and training of personnel; and
``(ii) activities necessary to achieve operational
status in order to meet FQHC requirements under
1905(l)(2)(B).
``(2) From the amounts awarded to an entity or organization under
this section, funds may be used for the purposes of paying for the
costs of recruiting, training and compensating staff (clinical and
associated administrative personnel (to the extent such costs are not
already reimbursed under part A or any other State or Federal program))
to the extent necessary to allow the entity to operate at new or
expanded existing sites.
``(3) From the amounts awarded to an entity or organization under
this section, funds may be expended for the purposes of acquiring
facilities and equipment but only for the costs of--
``(A) construction of new buildings (to the extent that new
construction is found to be the most cost-efficient approach by
the Secretary);
``(B) acquiring, expanding, or modernizing of existing
facilities;
``(C) purchasing essential (as determined by the Secretary)
equipment; and
``(D) amortization of principal and payment of interest on
loans obtained for purposes of site construction, acquisition,
modernization, or expansion, as well as necessary equipment.
``(4) From the amounts awarded to an entity or organization under
this section, funds may be expended for the payment of services but
only for the costs of--
``(A) providing or arranging for the provision of all
services through the entity necessary to qualify such entity as
an FQHC under section 1905(l)(2)(B);
``(B) providing or arranging for any other service that an
FQHC may provide and be reimbursed for under this title; and
``(C) providing any unreimbursed costs of providing
services as described in section 330(a) of the Public Health
Service Act to patients.
``(e) Priorities in the Awarding of Grants.--(1) The Secretary
shall give priority in awarding grants under this section to entities
which have, as of January 1, 1993, been certified as an FQHC under
section 1905(l)(2)(B) and which have submitted a proposal to the
Secretary to expand their operations (including expansion to new sites)
to serve medically underserved populations or high impact areas not
currently served by an FQHC. The Secretary shall give first priority in
awarding grants under this section to those FQHCs or other entities
which propose to serve populations with the highest degree of unmet
need, and which can demonstrate the ability to expand their operations
in the most efficient manner.
``(2) The Secretary shall give second priority in awarding grants
to entities which have submitted applications to the Secretary which
demonstrate that the entity will qualify as an FQHC under section
1905(l)(2)(B) before it provides or arranges for the provision of
services supported by funds awarded under this section, and which are
serving or proposing to serve medically underserved populations or high
impact areas which are not currently served (or proposed to be served)
by an FQHC.
``(3) The Secretary shall give third priority in awarding grants in
subsequent years to those FQHCs or other entities which have provided
for expanded services and which project, and are able to demonstrate, that such
entity will incur significant unreimbursed costs in providing such
expanded services.
``(f) Return of Funds to Secretary for Costs Reimbursed From Other
Sources.--To the extent that an entity or organization receiving funds
under this part is reimbursed from another source for the provision of
services to an individual, and does not use such increased
reimbursement to expand services furnished, areas served, to compensate
for costs of unreimbursed services provided to patients, or to promote
recruitment, training, or retention of personnel, such excess revenues
shall be returned to the Secretary.
``(g) Termination of Grants.--(1)(A) With respect to any entity
that is receiving funds awarded under this section and which
subsequently fails to meet the requirements to qualify as an FQHC under
section 1905(l)(2)(B) or is an entity that is not required to meet the
requirements to qualify as an FQHC under section 1905(l)(2)(B) but
fails to meet the requirements of this section, the Secretary shall
terminate the award of funds under this section to such entity.
``(B) Prior to any termination of funds under this section to an
entity, the entity shall be entitled to 60 days prior notice of
termination and, as provided by the Secretary in regulations, an
opportunity to correct any deficiencies in order to allow the entity to
continue to receive funds under this section.
``(2) Upon any termination of funding under this section, the
Secretary may (to the extent practicable)--
``(A) sell any property (including equipment) acquired or
constructed by the entity using funds made available under this
section or transfer such property to another FQHC, in which
case the Secretary shall reimburse any costs which were
incurred by the entity in acquiring or constructing such
property (including equipment) which were not supported by
grants under this section; and
``(B) recoup any funds provided to an entity terminated
under this section.
``(h) Limitation on Amount of Expenditures.--The amount of funds
that may be expended under this title to carry out the purposes of this
part shall be for fiscal year 1994, $200,000,000, for fiscal year 1995,
$400,000,000, for fiscal year 1996, $600,000,000, for fiscal year 1997,
$800,000,000, for fiscal year 1998, $800,000,000, and for fiscal years
thereafter such sums as provided by Congress.''.
(b) Effective Date.--The amendments made by subsection (a) shall
become effective with respect to services furnished by a federally
qualified health center or other qualifying entity described in this
section beginning on or after October 1, 1993.
SEC. 4. STUDY AND REPORT ON SERVICES PROVIDED BY COMMUNITY HEALTH
CENTERS AND HOSPITALS.
(a) In General.--The Secretary of Health and Human Services (in
this section referred to as the ``Secretary'') shall provide for a
study to examine the relationship and interaction between community
health centers and hospitals in providing services to individuals
residing in medically underserved areas. The Secretary shall ensure
that the National Rural Research Centers participate in such study.
(b) Report.--The Secretary shall provide to the appropriate
committees of Congress a report summarizing the findings of the study
within 90 days of the end of each project year and shall include in
such report recommendations on methods to improve the coordination of
and provision of services in medically underserved areas by community
health centers and hospitals.
    (c) Authorization.--There are authorized to be appropriated to carry out the study
provided for in this section $150,000 for each of fiscal years 1994 and
1995. | Amends title XIX (Medicaid) of the Social Security Act to establish a part B (Health Services Access) to fund grants to federally-qualified health centers (FQHCs) and other entities for the expansion and development of primary health care service programs for medically underserved populations.
Sets forth grant eligibility criteria and the requirements grant applications must meet. Outlines limitations on the use of grant funds. Establishes priorities for the awarding of grants, with the highest priority for those FQHCs and other entities proposing to expand operations to serve medically underserved populations with the highest degree of unmet need in the most efficient manner. Requires entities receiving program funds to return excess revenues to the Secretary. Requires termination of grants to entities failing to meet certain requirements.
Directs the Secretary to study and report to the Congress on the relationship between community health centers and hospitals in providing such services. Authorizes appropriations. |
the goal of sustainability , as stated in the national environmental policy act1 ( nepa ) , and recently ( 2011 ) reiterated by the national research council2 is to create and maintain conditions , under which humans and nature can exist in productive harmony , that permit fulfilling the social , economic , and other requirements of present and future generations .
productive harmony between humans and nature depends on the integrity of ecosystems in terms of structure , function , and their capacity to produce goods and services that humans require,3 including clean and plentiful water . to achieve the nepa goal ,
it is essential to sustain adequate , accessible supplies of clean water for the support of human health , ecosystems , and their attendant social and economic benefits . indeed ,
sustainable management of aquatic ecosystems is a worldwide priority.4 groundwater , wetlands , lakes and reservoirs , upland streams , great rivers , estuaries , and coastal oceans are all valuable water resources , and all are embedded in watersheds or , in the case of most coastal systems , strongly connected to them .
the quality of water and the quantities available depend to a great extent on the properties of watersheds : geology , topography , climate , land cover , and human uses .
many watersheds are dominated by human uses , including mining , oil and gas extraction , urban development , and agriculture ( fig .
1 ) ; this dominance can be expected to increase in concert with human population growth and urbanization .
urban watersheds , many of which exhibit impervious land cover in high proportions , combined sewer overflows ( csos ) , polluted runoff , and contaminated effluents , present major challenges to the integrity and sustainability of water resources .
these conditions have been shown to cause severe impairment of aquatic ecosystems.5,6 in epidemiological studies , watersheds with csos have been associated with higher rates of human illness.7,8 in agricultural watersheds , water quality and quantity can be altered by irrigation and degraded by runoff of sediment , nutrients , pesticides , and herbicides , along with microbes , antibiotics , and hormones associated with livestock operations.9,10 these stressors pose risks to aquatic life , recreation , and drinking water sources . in many watersheds
not affected by urbanization or agriculture , various anthropogenic disturbances such as mining , drilling , timber harvest , and some recreational uses can degrade water quality and aquatic habitats .
it is important to note that these effects are not always local ; altered water flow and pollutant loads can have effects far downstream , stretching from headwaters to estuaries and coastal waters.11 cornwell et al.12 provide a comprehensive account of the occurrence and effects of multiple stressors ( chemicals , microbes , invasive species , etc ) on humans and ecosystems in the usa - canada great lakes and their watersheds . in cases where waters are wholly or relatively unimpaired , new and expanding human uses of watersheds pose risks of future degradation .
population growth requires development of land and infrastructure for residential , commercial , transportation , and supporting uses .
rapid expansion of oil and gas extraction by means of deep drilling and hydraulic fracturing is putting increasing demands on water resources and the capacity of watersheds to sustain water quantity and quality . climate change and extreme events
( eg , major floods and droughts ) , whether natural or forced by climate change , also threaten watershed integrity and sustainability .
a recent study ( 2015 ) shows that water temperatures in watersheds of the us mid - atlantic region are rising significantly , with many implications for aquatic life and human uses , including losses of cold water fisheries , increases in pathogens and invasive species , and greater fluxes of nutrients that contribute to eutrophication and harmful algae blooms.13 in the interest of sustainability , watersheds that presently supply plentiful clean water and fully support aquatic life need to be identified , cataloged and targeted for enhanced protection.14 in this article , we consider the advantages of assessing the relationships between ecosystem services , human health , and socioeconomic values in the context of water quality , water quantity , landscapes , the condition of watersheds , and the connectivity of waters from highlands to oceans .
fuller understanding and communication of these relationships is essential if watersheds and water resources are to be managed comprehensively and sustainably .
first , we present a few examples of how human activities on land can be translated through water resources to impacts on human health and welfare .
next , we present in conceptual form how this knowledge of watershed sustainability can be developed , while addressing some of the inherent difficulties : complexity , uncertainty , causal inference , an array of geographic and temporal scales , and data incompatibilities ( eg , matching human health data with environmental data ) .
finally , we discuss the benefits of assessing , predicting , and communicating the qualities and values of watersheds as human - ecological systems .
modern history demonstrates that the introduction of new technologies can have significant unanticipated consequences for both human health and ecological integrity .
for example , in the pre-1970s , the environmental persistence , bioaccumulation , and toxicity of industrial chemicals were largely uncharacterized .
our lack of understanding and attention unequivocally contributed to environmental and human health injuries , which proved costly to correct in terms of both time and financial resources.15 although the mechanistic linkages between human health and ecosystems are complex and poorly understood , it is important to understand these linkages if we are to resolve many of the most significant ecological public health challenges to sustainable watersheds . whereas larger scale crises could be developing , such as the effects of global climate change or encroachment of new landscapes , dramatic localized environmental disasters capture public attention and are immediately disruptive to daily life .
as reported by luoma et al.16 , the headwater drainage basin of the clark fork river ( butte , mt , usa ) was a site of copper and zinc ore mining and smelting beginning in the nineteenth century ; more than 1 billion ( 10^9 ) metric tons of ore and waste rock were produced before the smelter closed in 1980.17 the currently impaired condition of the clark fork river appears to be a legacy from historic waste inputs , which persist in the system as contaminated sediments , water , and biota.18,19 the river s trout fishery remains far less productive than expected for a montana stream.20 losses of environmental goods and services have been described as unambiguous , with the most extreme effects found in butte and deer lodge valley .
metals associated with the mines contaminate waters hundreds of kilometers downstream.16 it has been shown recently ( 2014 ) that even very low densities of mines in a watershed can affect fish assemblages negatively in connected streams at the regional scale , well beyond the stream reaches in proximity to the mines.21 the environmental debate raised by reduced diversity of the benthic community and histopathological lesions in trout also raised concerns about potential effects on human health .
health effects in the area were a problem historically ; miranda et al.15 speculated that human exposure to cadmium ( linked to cardiorenal disease ) , arsenic , and radon ( both linked to cancers ) associated with mining provided plausible links to the observed high rates of these diseases .
the loss of environmental goods and services was directly linked to changes in human well - being,16 including higher than expected cancer rates.22 this example is a cautionary tale : in hindsight , if care had been taken all along to sustain the clark fork river ecosystem and its services , much social and economic harm could have been prevented .
more recent ( 2014 ) examples of great scope and consequence in the us include ( 1 ) a spill of industrial chemicals into the elk river , west virginia , which contaminated a major public water supply , restricting water use for ~300,000 residents , fueling hundreds of complaints of illness and discomfort and leading to several hospitalizations,23 and ( 2 ) loss of drinking water supplies to about 500,000 residents in the toledo , ohio area , because of toxic algae blooms ( primarily related to phosphorus pollution ) in lake erie.24 although in both cases the immediate effects on human health were less than catastrophic , the economic and social ( well - being ) effects were enormous ( on the order of $ 100 million and approximately 800,000 people temporarily deprived of safe water ) , while the long - term effects on river and lake ecosystems have yet to be reported .
the potential ecological effects , in turn , may reverberate in the economic and social spheres for years or decades if , for example , the ecosystems that support fisheries have been damaged by the pollution .
the goal of preventing contamination of water resources , if attained , would preclude such events .
nevertheless , responses to and outcomes of these watershed - related disasters could have been improved by such means as more rapid detection of spills and the presence of harmful contaminants , improved knowledge of health and ecological effects , and prediction of adverse events . in support of these ends
, technologies are now available for monitoring site - specific water quality continuously and remotely , synoptic monitoring of surface water quality and algae blooms over large areas of coastal and inland waters using satellite imaging , and detecting and quantifying minute concentrations of thousands of contaminants in water .
advances in molecular technology have led to the documentation of full genomic sequences of several multi - cellular organisms , ranging from nematodes to humans .
the related molecular fields of proteomics and metabolomics are beginning to advance rapidly as well .
in addition , advances in bioinformatics and mathematical modeling provide powerful approaches for elucidating patterns of biological response embedded in the massive datasets produced during genomics research .
thus , changes or differences in the expression patterns of entire genomes at the level of the mrna , protein , and metabolism can be assessed rapidly .
collectively , these emerging approaches may greatly enhance the ability to detect and predict problems , as well as establish causal mechanisms , thereby addressing major challenges to understanding the integration of ecosystem services and public health .
many of the responses to various stressors are evolutionarily conserved , so that there are correspondences between indicators of human health and ecological health .
for example , consider how animal , some plant , and microbial species respond to stressors , including emerging contaminants ( both synthetic and natural ) , or parasites .
it may be that the growing knowledge of comparative genomics between terrestrial and aquatic vertebrate , invertebrate , and plant species will substantially improve our ability to extrapolate effects currently derived in mammals to aquatic phyla in the future.25 in the opposite direction , we have the example of zebra fish ( danio rerio ) embryos as model organisms for screening chemicals for human toxicology.26 creatures that spend all or most of their lives in water , with long - term , immersive exposures to aquatic stressors , can be sensitive sentinels for risks to human health.27
by watershed epidemiology we mean adapting some of the precepts and methods of human epidemiology to the geographic scales of watersheds in order to gain a more complete understanding of connections between the environment , water resources , and the health and well - being of human populations .
kolok et al.28 made a strong case for watersheds as natural boundaries in epidemiological evaluations of human effects of waterborne agricultural chemicals , especially endocrine - disrupting ( hormone - like ) compounds .
monitoring , assessment , and diagnosis of impairments in watersheds traditionally have been focused on ecological condition , based on various physical , chemical , and biological indicators .
these types of indicators , although they can be informative , or even comprehensive from an ecological perspective , are insufficient if we are to consider watersheds within an evolving ecological public health paradigm.29 clearly , humans have altered and will continue to alter watersheds while concomitantly remaining intimately dependent upon the goods and services provided by these ecological systems . these alterations , or impacts , result from combinations of biological , physical , and socioeconomic phenomena . until recently , evaluating and managing the effects of human activities on ecosystems , or managing the impacts of environmental goods and services on public health , have been undertaken as separate activities and treated as fundamentally distinct phenomena .
the nature and science of the new paradigm are beginning to be explored , as are the consequences of decision making that jointly affects ecosystem services and public health . despite the challenges of measuring and communicating watershed sustainability , we quote norström et al.30 in reference to global sustainable development goals :
recent social - ecological systems - based approaches for measuring multiple ecosystem services and human wellbeing [ sic ] provide hopeful avenues for developing integrated and scalable indicators .
coutts et al.29 present a discussion of the growing understanding of the critical dynamic relationships between ecosystems and human health , with a suite of conceptual models that illustrate how this understanding has evolved historically .
further , a comprehensive information tool , the eco - health relationship browser,31 has been made available by the us environmental protection agency ( epa ) for exploring and documenting ecosystem - public health connections . here , we outline the development and application of a system of indicators and models , integrating the ecological condition and integrity of watersheds with economic values , social values , and human health outcomes , which will be scalable from small watersheds to regional and national assessments for the us ( fig . 2 ) .
when fully operational , this system will extend beyond conceptual and knowledge models into the spatial and temporal dimensions as a tool for planning and managing watersheds and water resources for sustainability .
the epa , in cooperation with states , tribes , and other federal agencies , conducts periodic national assessments of lakes , rivers and streams , estuaries , and wetlands through its national aquatic resource surveys.32 because the surveys are conducted with probabilistic designs , they are spatially unbiased and thus can be used to assess the overall condition of water resources at national , regional , and , in some cases , finer scales ( eg , states ) . these data are supplemented and complemented by other comprehensive monitoring programs , including the national water quality assessment program , operated by the u.s . geological survey .
currently , epa is conducting research , building on the national assessments , to ( 1 ) develop predictive models of watershed integrity , ( 2 ) incorporate human health indicators into watershed assessments , and ( 3 ) estimate the economic values of the goods and services supplied by water and watershed ecosystems .
several complementary datasets are available at national scales ; examples include land cover and other physical variables,34 quality and quantity of surface and groundwater,33 and a wide array of human health statistics , which can be explored online.35 in table 1 , we list several types of data that , in combination with a variety of models , will be used to quantify the connections in figure 2 . at the national scale , these types of data applied across thousands of watersheds ( or millions depending on the scale of analysis )
will provide substantial statistical power to elicit associations between the environment , ecosystems , and indicators of human health and welfare . beyond the direct effects of poor water quality on human health caused by pathogens , harmful algae , and toxic contaminants ,
watershed epidemiology can be used to explore less direct associations between ecological integrity and human well - being : for example , we could explore correlations of health and economic data with the availability and quality of recreational , cultural , and esthetic experiences .
it should be possible also to investigate demographic variability in associations across subpopulations , eg , by ethnicity , gender , and income level for considerations of environmental justice .
examples of statistical modeling of these relationships could include functional ( regression ) analysis of indicators as functions of stressors , multivariate classification and ordination of watersheds and human populations , and bayesian network analysis .
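as a minimal sketch of the functional ( regression ) analysis just mentioned , the following python fragment fits a synthetic illness - rate indicator to a hypothetical watershed stressor ; the variable names , data , and effect size are invented for illustration and are not drawn from the surveys described here .

    import numpy as np

    rng = np.random.default_rng(42)
    n = 500                                         # hypothetical number of watersheds
    impervious = rng.uniform(0, 60, n)              # stressor : percent impervious cover
    illness = 2.0 + 0.05 * impervious + rng.normal(0, 1.0, n)   # synthetic indicator

    # functional ( regression ) analysis : indicator as a function of the stressor
    slope, intercept = np.polyfit(impervious, illness, 1)
    pred = intercept + slope * impervious
    r2 = 1 - ((illness - pred) ** 2).sum() / ((illness - illness.mean()) ** 2).sum()
    print(f"slope = {slope:.3f} per percent impervious , r^2 = {r2:.2f}")

the same synthetic scaffolding extends naturally to the multivariate and bayesian - network analyses named above once real , watershed - aligned data are in hand .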
the results could be used to form hypotheses about the mechanisms and causes of associations , which could be tested with additional analysis or experimentally and incorporated into mechanistic models for predictive and diagnostic purposes . the conceptual model depicted in figure 2 provides the framework for organizing these analyses .
many of the known linkages between human health and the condition of the environment are supported largely by correlational evidence . that is , higher rates of some diseases are observed in the presence of higher exposures to environmental stressors .
an example is the epidemiological evidence for illnesses associated with water contact recreation , where exposures are estimated by water contact history , proximity of beaches to pollution sources , and the presence of indicator organisms as surrogates for the pathogens that cause illness.40 even when these associations are robust , ie , consistent over time and space , or observed mostly in exposed versus unexposed control groups , they are not fully probative of cause and effect and may be controversial or challenged in the regulatory realm .
therefore , it can be important to establish a greater standard of proof through experimentation and elucidation of causal mechanisms .
integrating ecosystem services with public health in the context of sustainable watershed management implies that we can derive mechanistic understandings of the relationships between environmental factors and human health , thereby establishing cause and effect . for regulation and management of specific stressors or sources , this level of causal understanding is often required .
however , many environmental concerns ( especially at the scale of entire watersheds , where there may be cumulative effects of multiple stressors from multiple sources41 and a variety of conditions that modify stressor - response relationships ) are laden with complexity and uncertainty .
scientific uncertainty about particular issues may result in controversy among scientists and confusion among nonscientists ( for example , the case of the harmful dinoflagellate pfiesteria spp ) .
these disparities can delay , interfere with , or distort policy decisions.16 however , when watersheds are viewed as ecosystems , with feedbacks and emergent properties , full mechanistic understanding may be precluded , or less useful than a more holistic systems view.42 the ecological human health and sustainability paradigms , in combination , should foster policy and management approaches to water resources and watersheds that are more flexible , adaptive , and effective than traditional rigid regulatory regimes , but also compatible with regulation.43 there is a need to better understand ecological and human exposures ( ie , environmental concentrations and routes of exposure ) to chemical and microbial contaminants as precursors to adverse effects.44,45 identifying geochemical conditions and other factors that affect contaminant bioavailability is one key question .
for example , biological effects of some contaminants vary depending on the ph , alkalinity , hardness , and specific ion concentrations in water,46 which in turn are determined by interactions of watershed geology and , often , human influences . moreover
, a particular exposure may differ in its effects on different species , individuals , and populations .
establishing relationships between exposures and effects requires weight - of - evidence postulates common to many fields of science.47,48 biological plausibility based on detailed mechanistic understanding is also critical in identifying causal relationships between exposures and effects .
the level of certainty that would be derived from defined exposures combined with defined effects and biological plausibility , while a notable goal , may be unrealistic given the uncertainties associated with the integration of ecosystem services and public health . in the real world , multiple exposures and multiple effects are ubiquitous .
it is clear that , while the future lies in uncertainty , we must be ever more vigilant in developing sophisticated statistical insights to permit us to use effectively the ever - increasing availability of large data sets , particularly when things ( watersheds in this case ) have to be ranked or classified.49 vast amounts of environmental , human health , and other relevant data are freely available and continue to accrue at an accelerating rate : for example , continuous remote water quality monitoring by automated sensors , and synoptic monitoring of lakes , reservoirs , and coastal waters by satellites are becoming routine .
one important issue is that , although our targets are watersheds , most health and demographic data are classified by political jurisdictions ( states , counties , census blocks ) , which virtually never correspond geographically with watersheds .
significant efforts in geographic analysis will be required to match data - sets , eg , for watershed epidemiology investigations .
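matching jurisdiction - based health data to watershed boundaries typically means intersecting the two sets of polygons and apportioning counts by area ; a minimal sketch , assuming python with geopandas and invented toy geometries , column names , and rates ( real layers would first need a shared projected coordinate system ) :

    import geopandas as gpd
    from shapely.geometry import box

    # two invented square counties carrying an illness rate and a population
    counties = gpd.GeoDataFrame(
        {"fips": ["001", "003"], "rate": [3.0, 5.0], "population": [1000, 2000]},
        geometry=[box(0, 0, 2, 2), box(2, 0, 4, 2)])
    # one watershed straddling both counties ( toy planar coordinates , no crs )
    watersheds = gpd.GeoDataFrame({"huc": ["W1"]}, geometry=[box(1, 0, 3, 2)])

    # intersect the layers , then apportion county populations by area share
    pieces = gpd.overlay(counties, watersheds, how="intersection")
    county_area = counties.set_index("fips").geometry.area
    share = pieces.geometry.area.values / pieces["fips"].map(county_area).values
    pop_w = pieces["population"].values * share
    rate = (pieces["rate"].values * pop_w).sum() / pop_w.sum()
    print(rate)   # population - weighted rate for watershed w1 ( about 4.33 here )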
temporal lags between exposures and effects are another element of complexity ; cancers , for example , can develop years or decades after exposures to environmental carcinogens .
further , we may find that some particular types of data strongly applicable to our questions may be sparse , unavailable , or of unsuitable quality .
some effort in primary data collection will be necessary , both to address specific questions and to validate model predictions .
we now have the tools ( massive computing , geographical information systems ) to work with big data , and to take advantage of these opportunities to greatly increase and integrate knowledge of relationships between ecosystems and human health .
understanding and communicating these relationships should lead eventually to greater awareness of the roles watersheds play in human well - being , and hence to better management and stewardship of watersheds and water resources .
watersheds , where human and ecological systems are inseparable , and which hierarchically span several orders of geographic magnitude , are suitable , perhaps ideal , units for applying the ecological public health paradigm . | sustainable management of aquatic ecosystems is a worldwide priority ; the integrity of these systems depends , in turn , on the integrity of the watersheds ( catchments ) in which they are embedded . in this article , we present the concepts , background , and scientific foundations for assessing , both nationally and at finer scales , the relationships between ecosystem services , human health , and socioeconomic values in the context of water quality , water quantity , landscapes , the condition of watersheds , and the connectivity of waters , from headwaters to estuaries and the coastal ocean .
these assessments will be a foundation for what we have termed watershed epidemiology , through which the connections between ecosystems and human health can be explored over broad spatial and temporal scales .
understanding and communicating these relationships should lead to greater awareness of the roles watersheds play in human well - being , and hence to better management and stewardship of water resources . the u.s .
environmental protection agency is developing the research , models , and planning tools to support operational national assessments of watershed sustainability , building upon ongoing assessments of aquatic resources in streams , rivers , lakes , wetlands and estuaries . |
In this grab taken from video made available on Sunday, Nov. 26. 2017, a train passes by dead reindeer, near Mosjoen, North of Norway. A Norwegian reindeer herder says that freight trains have killed... (Associated Press)
STAVANGER, Norway (AP) — A Norwegian reindeer herder says that freight trains have killed more than 100 of the animals on the tracks in three days.
Torstein Appfjell, a distraught reindeer herder in Helgeland county, said Sunday that the worst incident happened Saturday when 65 animals were mown down.
Appfjell told The Associated Press by telephone it was "totally tragic" and "unprecedented" for so many reindeer to lose their lives in this way. A total of 106 reindeer were killed since Thursday.
Appfjell represents four families in the area with a total of around 2,000 reindeer. He said that in the worst previous 12-month period, 250 animals were killed in train accidents.
VG newspaper reports that Bane NOR, which operates the trains, has now reduced speeds in the area. ||||| In what has been called a "bloodbath," a total of 106 Norwegian reindeer on their winter migration have been killed by freight trains in the north of the country since Wednesday, Norway's public broadcaster NRK said late on Sunday.
Sixty-five animals were mown down on a track on Saturday, and another 41 between Wednesday and Friday, NRK said.
The owner of the 65 dead reindeer, Ole Henrik Kappfjell, described the killings to NRK as "a senseless animal tragedy ... a psychological nightmare."
The reindeer deaths come as herders take the animals on a search for winter grazing grounds. The annual journey often proves dangerous, with many of the reindeer hit by cars or trains, or dying by drowning.
'Unprecedented' numbers
More than 2,000 reindeer were hit and killed along the same northern stretch of railway line between 2013 and 2016. One herder, Torstein Appfjell, however, told The Associated Press that the number of reindeer killed in recent incidents was "unprecedented."
Documentary filmmaker Jon Erling Utsi, who took photographs of the dead reindeer from Saturday's incident, called it a "bloodbath," adding that some animals did not die directly and had to be shot to put them out of their suffering.
Norway has a reindeer population of around 250,000, with most of them living in the country's far north.
Herders have called on the railway operator Bane NOR to install a fence along the track, but so far there has been no funding for it. According to the VG newspaper, however, the company has instructed its trains to drive more slowly in the area.
tj/rt (AP, AFP) ||||| Over 100 reindeer have died in just three days after being mown down by freight trains in Norway.
Reindeer herder Torstein Appfjell, 59, called the deaths "totally tragic" and said the number of animals killed was "unprecedented".
Mr Appfjell, who said he was "dizzy with anger", said the worst incident happened on Saturday when 65 animals were mown down.
The herder, who looks after around 2,000 reindeer in Helgeland in the North of the country, said 106 reindeer have been killed since Thursday.
At least 250 reindeer were killed by trains over the last 12 months.
Led by their herders, groups of animals have been migrating from their summer pasture in the mountainous regions towards the coast.
But as they head towards their winter home, many have been caught on the train tracks dividing their route.
Warnings for trains to drive slowly through the migration area failed to reach the drivers due to "a technical failure", according to Norwegian news website NRK.
Local train operator, Bane Nor has now reduced the speed of their vehicles in the area, according to local media.
Local residents, who say the slaughter of the reindeer on the tracks happens every year, are calling for a barrier to be built along the rails to protect the migrating animals. | – A reindeer herder describes himself as "dizzy with anger" after freight trains barreled into more than 100 reindeer in Norway over three days, killing 106. Torstein Appfjell, who Sky News reports oversees about 2,000 reindeer, calls the deaths "totally tragic" and frames that toll in that period as "unprecedented." Appfjell says the deadliest day was Saturday, when 65 were killed, reports the AP. Deutsche Welle has this tough detail: Per a documentary filmmaker who was on the scene, some animals were not killed upon impact and had to be shot. The animals are undertaking their winter migration, which sees them travel from mountainous pastures to the coast in search of food. It's a perilous journey, and deaths are not uncommon. Though Appfjell says train accidents previously killed a maximum of 250 reindeer over 12 months, Deutsche Welle reports a far higher toll, saying 2,000 have lost their lives along the northern railway line over the 2013 to 2016 period. Suggestions that a fence be added along the line have gone nowhere due to a lack of funding, though train drivers have now been told to reduce their speeds in the area. (More than 300 reindeer lost their lives in a very different kind of accident last year.) |
figures [ fig : spectrum ] and [ fig : lightcurve ] show the _ chandra _ high energy ( heg ) and medium energy ( meg ) gratings spectra , and the lightcurve for a @xmath2 ks observation of the single giant hr 9024 .
the lightcurves for a hard and a soft spectral band , shown in fig .
[ fig : lightcurve ] , indicate a harder spectrum during the two peaks of the emission ( @xmath3 ks and @xmath4 ks from the start of the observation ) , typical of stellar flares .
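a minimal python sketch of the two - band hardness - ratio computation that underlies such lightcurves ; the band labels , bin width , and count rates are invented and are not the hr 9024 data :

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0, 98_000, 1_000)          # s from start of an invented observation
    soft = rng.poisson(40, t.size)           # counts per bin in a soft band
    hard = rng.poisson(25, t.size)           # counts per bin in a hard band

    hr = (hard - soft) / (hard + soft)       # standard hardness ratio per time bin
    hardest = t[np.argsort(hr)[-5:]]         # bins where the spectrum is hardest
    print("hardest bins ( s ) :", np.sort(hardest))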
table [ tab : table ] summarizes the characteristics of the source and the parameters of the _ chandra _ observation .
table [ tab : table ] : stellar parameters and hetgs observation .
spectral analysis : the high resolution spectra provide several plasma diagnostics , from the analysis of both continuum and emission lines , and from the lightcurves in different spectral bands or in single lines .
the evolution of temperature and emission measure ( em ) during the flare allows us to construct a model of the flaring structure(s ) ( reale et al .
1997 ) .
* t is derived from the fit to the continuum emission , selecting spectral regions `` line - free '' ( on the basis of predictions of atomic databases such as aped [ smith et al .
2001 ] , chianti [ dere et al . 1997 ] ) ; the fit also provides an estimate for em from the normalization parameter . * the emission measure distribution ( dem ) is derived through a markov - chain monte - carlo analysis using the metropolis algorithm ( mcmc[m ] ; kashyap & drake 1998 ) on a set of line flux ratios ( o lines are the coolest ar the hottest , i.e. logt[k ] 6.2 - 7.8 ) .
coronal abundances are evaluated on the basis of the derived dem ; the abundance is a scaling factor in the line flux equation to match the measured flux .
abundances and dem : fig . [ fig : dem_abund ] shows the abundances ( _ left _ ) and dem ( _ right _ ) derived for the flare peak , and for the quiescent emission .
* abundance variations between flaring and quiescent phases
* very hot corona , also outside the flaring phase , as found also from an _ xmm - newton _ observation showing no obvious flare ( gondoin 2003 )
hydrodynamic modeling : we aim at reproducing the observed evolution of temperature , t , and emission measure , em .
the 1d hydrodynamic model solves time - dependent plasma equations with detailed energy balance ; a time - dependent heating function defines the energy release triggering the flare .
the coronal plasma is confined in a closed loop structure : plasma motion and energy transport occur only along magnetic field lines .
parameters : ( 1 ) loop semi - length @xmath5 cm ( a first estimate of l is obtained from the observed decay time ) ; ( 2 ) footpoint heating ; ( 3 ) initial atmosphere : hydrostatic , @xmath6 k ; however , the initial conditions do not affect the evolution of the plasma after a very short time . fig .
[ fig : modelfit ] shows the comparison of observed t and em evolution , and the x - ray lightcurve , with the corresponding quantities synthesized from the hydrodynamic model .
the observed evolution is reproduced reasonably well by a model characterized by :
* loop semi - length @xmath5 cm ( @xmath1 ) ;
* impulsive ( 20 ks , starting 15 ks before the beginning of the observation ) footpoint heating triggering the flare ; no sustained heating ( i.e. pure cooling ) ;
* volumetric heating @xmath7 erg/@xmath8/s , heating rate @xmath9 erg / s ;
* from the normalization of the model lightcurve we derive an estimate of the loop aspect ratio @xmath10 , i.e. the loop cross - section has radius @xmath11 cm .
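since a first estimate of the loop length comes from the observed decay time , a sketch of extracting that decay time by fitting an exponential to a lightcurve may be useful ; the data below are synthetic and the fit is illustrative , not the analysis actually performed :

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, tau, c):
        return a * np.exp(-t / tau) + c      # simple cooling - phase model

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 60e3, 200)          # s after the flare peak ( invented )
    counts = 100 * np.exp(-t / 15e3) + 5 + rng.normal(0, 2, t.size)
    (a, tau, c), _ = curve_fit(decay, t, counts, p0=(100.0, 10e3, 5.0))
    print(f"fitted decay time tau = {tau / 1e3:.1f} ks")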
the evolution of this very hot spectrum is reproduced by a hydrodynamic loop model with @xmath12 .
this loop model has roughly the same parameters as models that satisfactorily reproduce other large flares : e.g. flares in pre - main sequence stars ( favata et al . 2005 ) .
large flares observed in very active stars seem to have very similar characteristics , possibly with important implications for the physics of these phenomena .
large flares such as the one observed for hr 9024 are very unusual in single evolved stars , while being more common in active binary systems .
future work : ( 1 ) finer spectroscopic analysis and hydrodynamic modeling ; detailed comparison of observed spectra with synthetic spectra derived from the hydrodynamic model .
the high resolution spectroscopy together with the high signal for this observation provides a large amount of constraints to the model .
( 2 ) explore possible evidence of non equilibrium ionization effects .
( 3 ) determine robust constraints on abundance variations during flare ( 4 ) analysis of fe fluorescent emission : we can obtain constraints on the geometry of the emitting plasma , in particular on the height of the illuminating source , i.e. the loop sizes , obtaining a cross - check to the results of the hydrodynamic modeling .
this observation provides the first clear evidence of fluorescence in post - pms stars other than the sun ( i.e. , fluorescence from the photosphere , while in pms stars there is evidence that the fluorescence emission comes from accretion disks ) .
dere , k.p . et al . 1997 , a&as , 125 , 149
favata , f. et al . 2005 , apjs , coup special issue , in press
gondoin , p. 2003 , a&a , 409 , 263
kashyap & drake 1998 , apj , 503 , 450
reale , f. et al . 1997 , a&a , 352 , 782
smith , r.k . 2001 , apj , 556 , l91 | we analyze a _ chandra _ hetgs observation of the single g - type giant hr 9024 .
the high flux allows us to examine spectral line and continuum diagnostics at high temporal resolution , to derive plasma parameters ( thermal distribution , abundances , temperature , ... ) .
a time - dependent 1d hydrodynamic loop model with semi - length 10@xmath0 cm ( @xmath1 ) , and impulsive footpoint heating triggering the flare , satisfactorily reproduces the observed evolution of temperature and emission measure , derived from the analysis of the strong continuum emission .
the observed characteristics of the flare appear to be common features in very large flares in active stars ( also pre - main sequence stars ) , possibly indicating some fundamental physics for these very dynamic and extreme phenomena in stellar coronae . |
we consider the grossone , @xmath5 formalism of yaroslav sergeyev @xcite .
see also @xcite for other discussions of this structure and its applications .
this paper begins from the stance that there are no completed infinite sets .
we shall show that the grossone is then naturally interpreted as a generic finite natural number and that the sergeyev natural number construction @xmath1 can be seen as a generic finite set . by a _ generic _
finite set , i mean that @xmath6 represents the properties of an arbitrarily chosen finite set of integers taken from @xmath7 to a maximal integer represented by @xmath8 this means that for any given finite realization of @xmath9 , @xmath0 will be larger than all the other integers in that realization . in this sense
, we can say that @xmath0 is larger than any particular integer .
the grossone @xmath0 is a symbol representing the highest element in a generic finite set of natural numbers .
infinity is not the issue here .
the issue is clarity of construction and the possibility of calculation with either limited or unlimited means .
the original intent for the grossone and the set @xmath1 was that @xmath6 should represent the infinity of the natural numbers in a way that makes the counting of infinity closer to computational issues than does the traditional cantorian approach . in sergeyev s
original approach , @xmath6 is understood to be an infinite set , but one does not use the usual property of a cantorian infinite set that an infinite set is in @xmath10 correspondence with a proper subset of itself .
rather , one regards special subsets such as the even integers as having their own measurement as parts of @xmath11 for example , in the original approach it is fixed that grossone @xmath0 is even , since it is postulated that the sets of even and odd numbers have the same number of elements , namely @xmath12 analogously , @xmath13 are integral for finite @xmath14 in our approach , which in this sense is a relaxation of the sergeyev original approach , we interpret the grossone as a _ generic _ natural number and so it can be interpreted to be either even or odd .
we shall see , in section @xmath15 , how this has specific computational consequences in the summation of series .
both interpretations are useful .
the generic integer interpretation that we use here has the advantage that the assumptions made are just those that can be applied to finite sets of natural numbers . this will become apparent as we continue with more theory and examples .
the present paper is self - contained , and does not take on any axioms from the work of sergeyev .
we use the notation @xmath0 as in the sergeyev grossone , so that our work can be compared with his . as we shall see , the result is that a different conceptual approach , via generic finite sets , has a very close relationship with the sergeyev theory of an arithmetic of the infinite !
our theory can be seen as a relaxation of the sergeyev theory where we no longer assume that @xmath0 is divisible by arbitrary natural numbers . as explained in section 3
, we adopt the _ transfer principle : any statement @xmath16 using @xmath5 is true if there is a natural number @xmath17 such that @xmath18 is a true statement about finite natural numbers @xmath19 for all @xmath20 _ this is the criterion determining the truth of statements about @xmath5 creating a theory distinct from the sergeyev theory , but sharing many of its notations and conceptual moves .
see section 3 for further discussion and , in particular , for a comparison with the axioms of @xcite .
we reflect here many of these issues in the interpretation of @xmath6 as a generic finite set .
our @xmath6 is not a set but rather a symbolic structure that stands for any finite set . as a symbolic structure @xmath6
is inductively defined so that any of the following symbols denotes @xmath21 @xmath22 @xmath23 @xmath1 @xmath24 @xmath25 @xmath26 for any finite natural number @xmath14 this means that there are infinitely many possible symbolic structures that indicate @xmath11 each structure is finite as a symbolic structure .
depending upon the size of the set to which we shall refer , any and all of these symbols can be used for the reference .
any given positive integer @xmath19 can occur , and does occur , in all such representatives past a certain point .
consequently , we say that @xmath19 _ belongs _ to @xmath6 for all natural numbers @xmath14 and we say that @xmath0 _ can be greater than _ any given natural number @xmath14 in this form we create a language in which speaking about @xmath6 is very similar to speaking about an infinite set , but @xmath6 , being a symbol for a generic finite set , is not to be interpreted as a cantorian infinite set .
the issues about subsets and @xmath10 correspondences do not arise .
the paper is organized into sections @xmath27 and @xmath28 devoted to this point of view about genericity .
the style of writing for sections @xmath29 and @xmath28 is sometimes polemical , as these sections are written in the voice of a speaker who is convinced that only finite sets should be allowed in mathematics .
infinite sets are seen to be taken care of by the concept of the generic finite set .
we show how to apply the grossone formalism by first thinking about how a computation could be written finitely , and then how it may appear in the limit where the grossone @xmath0 is itself regarded as an infinite number . except for certain situations where we can consider limits as @xmath0 becomes infinite
, we do not use the original interpretations of the grossone as an infinite quantity .
an example of a situation where one can shift from one interpretation to the other is in a summation of the form @xmath30 where @xmath31 is a function defined on the natural numbers .
for example , consider @xmath32 we can regard this as standing for all specific formulas of the type @xmath33 where @xmath17 is any natural number , _ or _ we can regard it as a generic summation with a generic result of @xmath34 the generic result can be regarded as formally similar to the infinite result that comes from taking @xmath0 infinite .
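as a concrete worked instance ( the placeholders above hide the exact formula , so we assume for illustration that the sum in question is the triangular - number sum ) , writing \g1 for the grossone : $$\sum_{k=1}^{\g1} k = \frac{\g1 ( \g1 + 1 )}{2} , \qquad \mbox{and , instantiating } \g1 = m : \qquad \sum_{k=1}^{m} k = \frac{m ( m + 1 )}{2} .$$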
both ways of thinking about the answer tell us how the summation behaves when the number of summands is @xmath0 and @xmath0 very large .
other examples will be discussed in the body of the paper .
section @xmath35 is written from another point of view . in this section
we take a categorical and relative point of view about `` being infinite . ''
we accept that in bare set theory a set is infinite if it is in @xmath10 correspondence with a proper subset of itself .
but in another category we ask that this @xmath10 correspondence be an injection in that category that is not a surjection .
this means , for example , that a circle or a sphere in the topological category is not infinite ( hence finite by definition ) since there is no homeomorphism of a sphere to a proper subset of itself .
we examine the model @xmath6 for the extended natural numbers from this point of view , and show that it can be construed as set theoretically infinite , but topologically finite .
we hope that these two points of view for interpretation will enrich the intriguing subject of the grossone and its uses .
in yaroslav sergeyev s theory of numeration , he considers a completion of the natural numbers @xmath36 to a set that contains an infinite number @xmath5 referred to as _
grossone_. the completion is denoted by the notation @xmath6 in the form \{ 1 , 2 , \ldots , \g1 - 2 , \g1 - 1 , \g1 \} , writing \g1 for the grossone .
the grossone , @xmath0 , behaves `` just like '' a very large integer , so that @xmath6 as a set is conceived to have all of its properties analogous to those of a finite set of integers such as @xmath37 while sergeyev does not quite explicitly say that @xmath6 and @xmath38 are logical twins just so long as @xmath17 is very very large , this is the basic idea that we explore in this paper .
for example , it should not be the case that @xmath6 should appear infinite in the sense of having a @xmath10 correspondence with a proper subset of itself .
some of the usual attempts obviously fail .
for example if we try to map @xmath6 to itself by the map @xmath39 then @xmath40 and so we need a larger domain .
this mirrors the problem of mapping the finite set into the infinite set @xmath11 but what set is this @xmath6 ?
how should we interpret its two appearances of the `` three dots '' ?
in the case of @xmath41 the three dots refer to the peano axioms for the ( usual ) natural numbers , assuring us that given a natural number @xmath19 then there is a successor to that number indicated by @xmath42 and that @xmath43 is never equal to @xmath14 the principle of mathematical induction is then used to characterize the set @xmath44 if @xmath45 is any subset of @xmath41 containing @xmath7 and having the property that @xmath46 implies that @xmath47 then @xmath48 at first glance one is inclined to guess that @xmath6 consists in two copies of @xmath49 one ascending @xmath50 and one descending @xmath51 if we then take the union @xmath52 is this @xmath53 i submit that this _ is not _ the desired @xmath11 for one thing , if one starts counting upward @xmath54 one never leaves the left half of this union . and if one starts counting down , one never leaves the right half of the union .
this is not in analogy to the large finite set , where either counting down or counting up will cover all of the territory .
also , there appears to be an injection of the union to itself that is not a surjection .
just add one to every element of the left half and subtract one from every element of the right half .
all in all , we must search for a different model for @xmath55 or a different category for it to call its home .
i believe that the simplest interpretation for @xmath6 is that it is a _ generic finite set .
_ this means that we interpret @xmath0 as a generic `` large '' natural number .
@xmath0 is not any specific natural number unless we want to take it as such .
then since @xmath6 is a generically finite set , it has no @xmath10 correspondence with a proper subset of itself .
in fact , @xmath6 is not a set at all .
it is a symbolic construct that represents the _ form _ of a finite set .
it is not a set just as an algebraic variable @xmath56 , standing for a number , is not a number . as such , @xmath6 has no members , nor does it have any subsets . as a symbolic construct however , @xmath6 has some nice features .
we all agree on the equalities @xmath57 @xmath58 @xmath59 @xmath60 @xmath61 for any _ specific _ natural number @xmath14 this is the nature of @xmath6 as a symbolic construct . in this sense
it gives the appearance of acting like an infinite set .
this property of the notations that we use is an inheritance from the set theory notation where exactly the same symbolic phenomenon seems to indicate infinity where one only has the form of a series of numbers and the peano property of knowing that for a given @xmath19 there is an @xmath62 in the usual notation for the set of natural numbers , we have @xmath63 for any specific natural number @xmath14 thus we can also regard @xmath41 as a generic symbol , but it is a generic symbol that is incomplete if it is to be viewed as representing a finite set .
the symbolic construction for @xmath6 allows us to start anew and eliminate the notion of a completed infinity from our work .
if we interpret @xmath6 as some particular finite set , then it has @xmath17 members where @xmath17 is some specific finite natural number .
it can have either an odd or an even number of members .
it is subject to permutations just as any finite set is .
the availability of such interpretations makes our viewpoint different from the original interpretation of the grossone . in the original interpretation
, @xmath0 is always infinite , and it is divisible by any natural number . in our interpretation @xmath0 as a symbolic entity is not a finite natural number .
it stands for any top value in some finite set , but not any particular value . for
this reason we can do arithmetic with @xmath0 and say that @xmath0 is taken to be greater than @xmath19 for any standard natural number @xmath19 and that @xmath64 _ in the generic context , for a statement about @xmath0 to be true , it must be true for all sufficiently large substitutions of natural numbers for @xmath8 _ for example @xmath65 is true because @xmath66 is true for all natural numbers @xmath17 that are larger than @xmath67 some statements are true for all numbers . thus @xmath68 is true since @xmath69 for all natural numbers @xmath70 in this way , we can use the grossone as an infinite number when we wish .
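a toy python rendering may make the criterion concrete ; a program can only probe finitely many values , so the horizon below is an illustrative stand - in for `` for all @xmath19 greater than @xmath17 '' , and the names are ours :

    def holds_eventually(s, m, horizon=10_000):
        # accept s as true of the grossone if s(n) holds for every finite n
        # with m < n < horizon ( a finite proxy for `` for all n > m '' )
        return all(s(n) for n in range(m + 1, horizon))

    s1 = lambda n: n > 100      # `` grossone > 100 ''
    s2 = lambda n: n + 1 > n    # `` grossone + 1 > grossone ''
    print(holds_eventually(s1, m=100), holds_eventually(s2, m=0))   # True True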
series of the form @xmath71 where @xmath72 are any real or complex numbers or algebraic expressions , are well defined .
for example , we can write @xmath73 and this is a generic finite sum in the form of a geometric series .
we can operate algebraically on such sums .
for example , @xmath74 whence @xmath75 and so , for @xmath76 @xmath77 note that we can replace @xmath0 in this formula .
for example , @xmath78 it is our intent to keep the interpretation that @xmath0 is a generic natural number , so in this sense @xmath79 can be regarded as specializing to the function @xmath80 for natural numbers @xmath14 but @xmath79 is also the generic expression for this function , and as such we can think of it as expressing cases where @xmath19 is taken to infinity . at this point
we can note the effects of taking @xmath0 even or odd , or the effect of taking a limit as @xmath0 becomes arbitrarily large .
for example if the absolute value of @xmath56 is less than 1 , then @xmath81 becomes arbitrarily small as @xmath0 becomes arbitrarily large .
generically , we can regard @xmath0 as an infinite number if we wish . in that case
we have that @xmath82 is infinitesimally close to @xmath83 but the expression @xmath84 is a more accurate rendition of the actual situation . working with the generic formalism allows us to dispense with limits in many cases and adds detail that is sometimes lost in the usual practice of working with limits .
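a short python check of the closed form just derived , with a finite m standing in for the grossone ( exact fractions make the identity an equality rather than an approximation ) :

    from fractions import Fraction

    def g(x, m):                     # finite geometric sum , m standing in for grossone
        return sum(x ** k for k in range(m + 1))

    x = Fraction(1, 2)
    for m in (10, 11, 1000):
        assert g(x, m) == (1 - x ** (m + 1)) / (1 - x)   # the closed form above
    print(g(Fraction(-1), 10), g(Fraction(-1), 11))      # 1 for even m , 0 for odd m

the last line previews the point made next : at x = -1 the finite sums genuinely oscillate with the parity of the upper limit .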
note in the last example that if we take @xmath85 then @xmath86 if we take @xmath0 even , then @xmath87 if we take @xmath0 to be odd , then @xmath88 we see that @xmath89 is a well - defined symbolic value that can be either positive or negative , depending on its instantiation as a number .
this is in direct contrast to sergeyev s usage for the grossone , where @xmath0 is taken to be even and so @xmath82 takes only the value @xmath90 our interpretation reflects the fact that the corresponding finite series oscillate between @xmath91 and @xmath90 here is another example .
let @xmath92 this is of course a rendition of @xmath93 for large generic @xmath70 we can apply the binomial theorem to conclude that @xmath94 thus @xmath95 from this expression we can see the limit structure that leads to the usual series formula for @xmath96 and we see the advantage of the exact formula for @xmath97 : by not taking the limit and examining the exact structure of the formula for large generic @xmath5 we see more and can write down more exact approximations for specific values @xmath17 substituted for the generic @xmath8 a reconstruction of this expansion is sketched below ; another example will then show even more .
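for concreteness , here is one reconstruction of that expansion ( the placeholders hide the original typesetting , so this rendering is ours , again writing \g1 for the grossone ) : $$\left( 1 + \frac{1}{\g1} \right)^{\g1} = \sum_{k=0}^{\g1} \binom{\g1}{k} \frac{1}{\g1^{k}} = \sum_{k=0}^{\g1} \frac{1}{k !} \left( 1 - \frac{1}{\g1} \right) \left( 1 - \frac{2}{\g1} \right) \cdots \left( 1 - \frac{k - 1}{\g1} \right) ,$$ in which each parenthesized factor differs from 1 by a term of order k / \g1 , exposing the limit structure that produces the usual series for e .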
let @xmath98 denote a list of all the prime numbers up to a generic natural number @xmath8 note that @xmath99 denotes a _
generic prime number_. this can be compared with sergeyev s concept of infinite prime numbers @xcite .
define an analogue of the riemann zeta function via $$\zeta_{\g1}(s) = \left[ \frac{1}{1 - \frac{1}{p_{1}^{s}}} \right] \left[ \frac{1}{1 - \frac{1}{p_{2}^{s}}} \right] \cdots \left[ \frac{1}{1 - \frac{1}{p_{\g1}^{s}}} \right] = \sum_{n \in n(\g1)} \frac{1}{n^{s}} .$$ here @xmath101 denotes all natural numbers that can be constructed as products of prime powers from the set of primes @xmath102 note that there is no limit to the size of elements of @xmath103 if we wish to keep bounds on that , we will have to introduce further notation . even though @xmath104 is a finite product , it produces natural numbers of arbitrarily high size .
thus a better finite zeta function will be given by @xmath105 then @xmath106 where @xmath107 denotes the generically finite set of natural numbers whose prime factorization is from the first @xmath0 primes , and whose prime powers are no higher than @xmath8 for a particular finite instantiation of @xmath0 this zeta function , being a finite sum , can be computed for any complex number @xmath108 this truncation of the usual limit version of the zeta function allows new computational investigations of its properties .
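a minimal numerical sketch of the finite euler product , in python with sympy supplying the primes ; the function name is ours , and the bounded - exponent variant discussed above would additionally cap the prime powers :

    from sympy import prime

    def zeta_g(s, g):
        # finite euler product over the first g primes
        out = 1.0
        for i in range(1, g + 1):
            out *= 1.0 / (1.0 - prime(i) ** (-s))
        return out

    for g in (5, 50, 500):
        print(g, zeta_g(2.0, g))   # approaches pi ** 2 / 6 = 1.6449 ... as g grows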
this example can be compared with the work in @xcite and @xcite .
one possible advantage of our approach is that it is essentially a finite approach .
we are suggesting that it is useful to examine the riemann zeta function , defined just for a finite collection of prime numbers , and then to examine how these finitized zeta functions behave as the number of primes used gets larger and larger .
we are not concerned here with extra fine structure of the zeta function at its limit , but rather with extra structure visible in the finite approximations to it .
this material will be explored further in a separate paper .
but the reader may ask , does not the construction of @xmath6 connote an infinite set if we regard the grossone @xmath0 as standing for an infinite evaluation , larger than any natural number ?
well , dear reader , what will you have ?
should we not be able , in this putative infinity , to count down from @xmath0 by successive subtractions to some number that we found by counting up from @xmath7 by successive additions ?
this is true of any finite set . yet
if this is true then it must be that for some natural numbers @xmath109 we have @xmath110 and so we find that @xmath111 and so the set @xmath6 would be neither generic , nor infinite , but simply one of the multitude of finite sets , in fact , the one of the form @xmath112 once again , @xmath113 is not a set at all , but the form of a generic finite set .
it is a symbol for the finite set structure , not a set at all . but this gives us freedom to regard this symbolic structure as something like a set , and in it , we indeed can not count down from the grossone to a finite natural number .
in many aspects of mathematics there is no need for any infinite set .
it is sufficient to have the concept of a generic finite set . for this
we need the notion of a generic natural number .
thus one speaks of the set of natural numbers from @xmath7 to @xmath19 , @xmath114 , and one usually conceives this as referring to some , as yet unspecified number @xmath14 one writes a formula such as @xmath115 and it is regarded as true for any specific number @xmath14 the concept of a generic @xmath19 requires a shift of attention , but no actual change in the formalism of handling finite sets and finite series .
to write @xmath116 is conceptually different from the above formula with @xmath19 . in the formula with @xmath19
we are referring to _ some specific natural number @xmath19_. in the formula with grossone , we are indicating a _ form _ that is true when @xmath0 is replaced by a specific integer @xmath117 and we are also indicating the behaviour of a corresponding limit or infinite sum . * the transfer principle .
* once we take a notation for a generic natural number as with sergeyev s grossone , we are led to use the concept of genericity . in this view
@xmath0 is symbolic , not infinite , but can be regarded as indefinitely large .
we can regard it as larger than any given number that is named .
this is a way of thinking instantiated by our _ transfer principle : any statement @xmath16 using @xmath5 is true if there is a natural number @xmath17 such that @xmath18 is a true statement about finite natural numbers @xmath19 for all @xmath20 _ this principle provides a transfer statement that allows us to apply the grossone in many particular situations .
the principle and its consequences are distinct from the sergeyev use of the grossone , and constitute a relaxation of that usage .
for example , if we are working with a computer that is limited to a specific size of natural number , then from the outside , as theorists , we can easily say that @xmath0 will be greater than any number allowed in that computational domain .
we can think of @xmath118 as a set that is larger than any specific finite set we care to name , but it is still generically finite and does not partake of the cantorian property of being in @xmath10 correspondence with a proper subset of itself .
it is like a finite set , but it is not any particular finite set .
the concept of a generic set is different from the concept of a set in the same way that the variable @xmath56 is different from a specific number in elementary algebra .
a generic set such as @xmath6 does not have a cardinality in the sense of cantor .
it is not in @xmath10 correspondence with any specific finite set , but if a specific natural number @xmath19 is given for @xmath0 , then the resulting set is of cardinality @xmath14 this is precisely analogous to the situation with an algebraic @xmath56 that in itself has no numerical value , but any substitution of a number for @xmath56 results in the specific value of that number . just as we endow algebraic expressions with the same properties as the numbers that they abstract , we endow generic sets with the properties of the finite sets which they stand for .
* the zhigljavsky axioms .
* the transfer principle can be compared with axiomatizations for the sergeyev system .
for example , in @xcite , zhigljavsky takes the following set of axioms :
1 . grossone , @xmath0 , is the largest natural number , so that @xmath119
2 . the grossone @xmath0 is divisible by any finite natural number @xmath14
3 . if a certain statement is valid for all @xmath19 large enough , then this statement is also valid for @xmath8
4 . assume we have a numerical sequence @xmath120 which converges to @xmath91 as @xmath121 then this sequence has a grossone - based representative @xmath122 whose last term @xmath123 is necessarily an infinitesimal quantity .
we see that axiom 1 corresponds to our formalization of @xmath6 as a generic set of consecutive natural numbers .
but we do not formally use the concept of " the set of all natural numbers " .
we do not assume axiom 2 .
our transfer principle is essentially the _ same _ as axiom 3 .
we agree with axiom 4 in the sense that one has the generic set @xmath124 and the symbol @xmath123 can be used to stand for a number whose absolute value is less than that of any given finite non - zero real number .
( in fact it follows from the transfer principle that @xmath125 for any positive natural number @xmath14 ) this means that in both our system and in the zhigljavsky system , infinitesimals naturally arise .
how they are treated may depend upon the further adoption of a theory of extended real analysis and is left open for specific applications .
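the generic reading can be mimicked with a symbolic variable in a computer algebra system : the finite geometric sum is kept as an exact expression in a symbol n , which can then be instantiated at specific natural numbers or examined in the classical limit . this is only an illustrative sketch ; the symbol names are hypothetical .

import sympy as sp

n, k = sp.symbols('n k', positive=True, integer=True)
x = sp.Rational(1, 2)

# exact finite geometric sum 1 + x + ... + x**n , kept generic in n
s = sp.summation(x ** k, (k, 0, n))

print(sp.simplify(s))         # an exact generic expression , 2 - 2**(-n)
print(s.subs(n, 10))          # instantiation at a specific natural number
print(sp.limit(s, n, sp.oo))  # the classical limit , 2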
our point in this paper has been that the symbolic constructions of yaroslav sergeyev can be regarded as generic finite sets .
does this preclude an infinite interpretation ?
in the case of a series such as @xmath126 ( @xmath56 not equal to @xmath7 ) , we would like to give the interpretation that if @xmath0 is an infinite integer , and @xmath127 , then s is infinitesimally close to @xmath128 the problem with this is the same as the corresponding problem in the calculus .
an integral is a limit of finite summations .
we are led to imagine ( by the leibniz @xcite notation for example ) that the integral @xmath129 is an uncountably infinite sum of infinitesimal terms of the form @xmath130 it takes the language of non - standard analysis @xcite to make formal sense out of this statement .
we can , using the grossone formalism , go in the other direction and articulate that integral as a generic finite sum . in order to do this
we have to make a choice of method of integration and then write the formula in generic fashion .
for example , consider @xmath131 we use riemann integration and choose an interval from @xmath132 to @xmath19 on the real line .
we divide the interval @xmath133 up into sub - intervals of length @xmath134 this gives the partition @xmath135 of the interval @xmath136 . we then have that , when the integral converges , @xmath137 thus the following finite generic expression stands for this integral .
@xmath138 we can even write @xmath139 where @xmath140 is our finite version of the exponential function from the previous section . here
@xmath141 where the limit is the classical limit and @xmath19 runs over the classical natural numbers .
we can not write a limit as @xmath0 approaches something since @xmath0 is generic and does not approach anything other than itself .
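the following sketch shows the kind of finite riemann sum being described , with the generic number of subintervals instantiated at concrete values ; the integrand exp(-x) and all names are assumptions , since the actual integrand sits behind the @xmath placeholders .

import math

def riemann_sum(f, a, b, n):
    # left - endpoint riemann sum of f on [a , b] with n equal subintervals
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: math.exp(-x)
for n in (10, 100, 10000):  # finite instantiations of the generic subdivision count
    print(n, riemann_sum(f, 0.0, 50.0, n))  # approaches 1 - exp(-50) , i.e. about 1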
let us write @xmath142 to mean that @xmath143 and @xmath144 differ by an infinitesimal amount ( in the sense of the discussion above ) or , equivalently , that @xmath145 thus @xmath146 when @xmath147 note that @xmath142 has the following properties , making it an equivalence relation ( the first three properties ) and more .
1 . @xmath148 for any @xmath149
2 . if @xmath150 , then @xmath151
3 . if @xmath150 and @xmath152 then @xmath153
4 . if @xmath142 and @xmath154 then @xmath155 and @xmath156
5 . if @xmath150 then @xmath157 where @xmath158 denotes the principal @xmath19-th root of the number @xmath56 ( working over the complex numbers ) .
this implies that @xmath159 in appropriate cases where one checks the limiting behaviour .
if @xmath160 where @xmath161 is infinitesimal , then it may happen that @xmath162 in this case one must examine the size of @xmath161 in relation to the size of @xmath163 here is an example . in the derivation that follows
we will use all of the above properties of @xmath164 and we leave it for the reader to check that each of the steps is valid .
we have the classical formula @xmath165 this is often seen as a consequence of the series formula for @xmath166 we can write the equation @xmath167 in particular , the well - known formula @xmath168 becomes the following generic limit formula @xmath169 in fact , we can , since this formula refers to properties of all formulas where @xmath0 is replaced by a specific integer , take roots and solve for @xmath170 as follows .
@xmath171 @xmath172 @xmath173 @xmath174 @xmath175 one can verify that this last limit formula is indeed a correct limit formula for @xmath176 well - known limit formulas appear from this formula when we replace @xmath0 by @xmath177 in it .
we then have @xmath178 in this version the finite versions are of the form @xmath179 and the limit formula is @xmath180 @xmath181 where @xmath182 denotes the imaginary part of the given complex number .
here we have successive square roots of @xmath183 and we can use the formula @xmath184 when @xmath185 from this it is easy to derive the famous formula of @xmath186 @xmath187 the grossone notation and the notion of generic finite sets allows us to write this derivation in a concise and precise manner .
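the nested - radical derivation can be checked numerically ; the sketch below assumes the target is the classical viete - type product built from successive square roots of 2 ( the actual constants are hidden behind placeholders , so this identification is an assumption ) .

import math

term, prod = 0.0, 1.0
for _ in range(20):               # a finite , generic - style truncation
    term = math.sqrt(2.0 + term)  # successive square roots of 2
    prod *= term / 2.0
print(2.0 / prod, math.pi)        # the truncated product already matches pi closely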
in articulating the notion of the generic finite we have examined an interpretation of the grossone extension of the natural numbers of yaroslav sergeyev as a generic finite set .
this works because the grossone extension puts a cap , the grossone @xmath0 , at the top of its model of the natural numbers and so formally resembles our way of thinking about a finite set such as @xmath188 where there is a least element , a greatest element , and a size for the set in the numerical sense of counting or ordinality . by working with this correspondence
we may end up with evaluations of size that are different from those of the sergeyev theory . a case in point is the finite sets of even natural numbers .
@xmath189 @xmath190 @xmath191 @xmath25 @xmath192 the _ generic set _ of even natural numbers is @xmath193 this notation is different from the way sergeyev would denote the even natural numbers , since he would place them " inside " the set @xmath194 as such there could be no appearance of @xmath195 in the sergeyev representation of the even numbers . in sergeyev
s language the _ size _ of the set of even natural numbers is @xmath196 since it comprises half of the natural numbers
. we can discuss _ size _ in our way by the transfer principle of the previous section .
the size of a specific finite set of even numbers @xmath197 is the number of elements of the set , which is @xmath19 in this case .
if the size for the specializations ( replacing @xmath0 by specific natural numbers @xmath19 ) of a generic set @xmath198 has the form @xmath199 for a function @xmath31 of natural numbers , then we say that _ @xmath198 has size @xmath200 _ thus we assign @xmath0 as the size for @xmath201 above .
this moves our theory in the direction of cantorian counting for sets even though we have not invoked a notion of @xmath10 correspondence for infinite sets .
the point of this example is to show that there are natural differences between the generic finite set approach and the sergeyev approach to the grossone .
there are other relationships of the generic finite set concept and standard set theory .
for example , in standard set theory we take as representative ordinals @xmath202 @xmath203 @xmath204 @xmath205 and so the _ generic finite ordinal _ is @xmath206 this should be held in contrast to the first infinite ordinal @xmath207 the generic finite ordinal has @xmath208 members and so we could name it @xmath209 in the tradition of making ordinals .
however , ordinals made in this generic fashion are not well - ordered since there is no end to the descending sequence @xmath210 this means that a theory of ordinals based on generic finite sets will have a character of its own
. this will be the subject of a separate paper .
in this section , we take a different approach to the grossone . we assume here the existence of infinite sets and the usual terminology of point set topology . with
that we discuss , by using categories , relative notions of infinity .
after all , a circle is not homeomorphic to any proper subset of itself . therefore the point set for the circle , uncountable in pure set theory , is _ finite _ in the category of topological spaces !
we formalize this notion below and indicate how it can be interfaced with the grossone .
we recall that a _ category _ @xmath3 is a collection of _ objects _ and _ morphisms _ where a morphism is associated to two objects and is usually written as @xmath211 where @xmath2 and @xmath212 are objects .
note that the morphism @xmath213 provides a directed arrow from @xmath2 to @xmath214 without further axioms the concept of a category is the same as the concept of a directed multi - graph .
the axioms for a category are as follows :
1 . given morphisms @xmath211 and @xmath215 there is a well - defined morphism called the _ composition of @xmath213 and @xmath216 _ and denoted @xmath217 the object @xmath2 is called the _ domain _ of @xmath213 and the object @xmath212 is called the _ codomain _ or _ range _ of @xmath218
2 . every object @xmath2 has a unique _ identity morphism _ @xmath219 such that for any @xmath211 , @xmath220 and for any @xmath221 , @xmath222
3 . if @xmath223 @xmath224 and @xmath225 then @xmath226 thus composition of morphisms is associative .
if @xmath227 and @xmath228 are categories , then we say that a _ functor _ from @xmath227 to @xmath228 , denoted @xmath229 , is a function that takes objects to objects and morphisms to morphisms such that if @xmath211 is a morphism in @xmath227 , then @xmath230 is a morphism in @xmath231 furthermore , we require of a functor that identity morphisms are carried to identity morphisms , and that compositions are taken to compositions in the sense that @xmath232 for all compositions @xmath233 in @xmath234 in this paper we will use categories whose objects are sets and whose morphisms are maps of these sets with whatever extra structure is demanded by that category . then there is a forgetful functor @xmath235 obtained by just taking the objects as sets and the morphisms as maps of sets , ignoring the extra structure . in such categories ,
a morphism @xmath211 is said to be _ injective _ if @xmath236 is injective in @xmath237 and @xmath213 is said to be _ surjective _ if @xmath236 is surjective in @xmath238 in @xmath239 one says that a set @xmath2 is _ infinite _ if there exists an injection @xmath240 that is not a surjection .
if every injection of @xmath2 to itself is a surjection , we say that the set @xmath2 is finite .
we relativize this notion to other categories .
if @xmath3 is a set - based category , we say that an object @xmath2 of @xmath3 is _ finite _ ( in @xmath3 ) if every injection of @xmath2 to itself _ in the category @xmath3 _ is surjective .
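before turning to topological examples , the set - level definition can be checked by brute force : the sketch below enumerates all self - maps of a small finite set and verifies that every injective one is surjective ( the naming is purely illustrative ) .

from itertools import product

n = 4
for f in product(range(n), repeat=n):  # all self - maps of {0 , ... , n-1} , as tuples of images
    injective = len(set(f)) == n
    surjective = set(f) == set(range(n))
    if injective:
        assert surjective  # on a finite set , the injections are exactly the permutations

print('every injection of a', n, 'element set into itself is a surjection')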
for example consider the category @xmath241 of topological spaces .
we see at once that the circle @xmath242 where @xmath243 are real numbers , is finite in this category since the circle is not homeomorphic to any proper subset of itself .
thus , while the circle consists in infinitely many points when looked at under the forgetful functor to set theory , in the topological category the circle is finite .
we shall see that this point of view on finite and infinite is very useful in sorting out how we deal with mathematical objects in many situations .
we now combine this topological point of view on finiteness with the grossone . in figure [ gcircle ] we show an embedding of a union of two sets of points in the form @xmath244 ( in the figure , @xmath0 is denoted by @xmath245 ) we regard @xmath6 as embedded in a circle with two points removed .
the two vertical marks on the circle in the figure denote the removed points . the circle with two points removed partakes of the subspace topology from the euclidean plane in which it is embedded .
we work in the category of orientation preserving homeomorphisms of the deleted circle @xmath246 that map the intervals @xmath247 and @xmath248 to themselves . in the categorical sense , @xmath246 is finite and there can be no such homeomorphism that takes @xmath6 to a proper subset of itself .
in fact , in this model , every such homeomorphism is the identity map when restricted to @xmath11 in figure [ graphs ] we give another example of how to topologically make an infinite set finite .
we have labeled a set of graphs with the " elements " of @xmath11 none of these graphs are homeomorphic ( we take the nodes of the graphs to be disks and the edges to be topological intervals ) and so a homeomorphism of the entire collection must take each graph to itself .
there is no topological injection of the collection of graphs to a subcollection of itself .
sergeyev ya.d . , _ numerical point of view on calculus for functions assuming finite , infinite , and infinitesimal values over finite , infinite , and infinitesimal domains _ , nonlinear analysis series a : theory , methods @xmath249 applications * 71 * 12 ( 2009 ) , e1688 - e1707 .
iudin d.i . , sergeyev ya.d . , hayakawa m. , _ interpretation of percolation in terms of infinity computations _ , applied mathematics and computation 218 ( 16 ) ( 2012 ) , 8099 - 8111 .
sergeyev ya.d . , _ on accuracy of mathematical languages used to deal with the riemann zeta function and the dirichlet eta function _ , p - adic numbers ultrametric analysis @xmath249 applications * 2 * , 2 , pp . 129 - 148 ( 2011 ) .
margenstern m. , _ using grossone to count the number of elements of infinite sets and the connection with bijections _ , p - adic numbers ultrametric analysis @xmath249 applications * 3 * , 3 , pp . 196 - 204 ( 2011 ) . | this paper introduces the concept of a _ generic finite set _ and points out that a consistent and significant interpretation of the _ grossone _ , @xmath0 notation of yaroslav d. sergeyev is that @xmath0 takes the role of a generic natural number .
this means that @xmath0 is not itself a natural number , yet it can be treated as one and used in the generic expression of finite sets and finite formulas , giving a new power to algebra and algorithms that embody this usage . in this view
, @xmath1 is not an infinite set , it is a symbolic structure representing a generic finite set .
we further consider the concept of infinity in categories .
an object @xmath2 in a given category @xmath3 is _ infinite relative to that category _ if and only if there is an injection @xmath4 in @xmath3 that is not a surjection . in the category of sets
this recovers the usual notion of infinity .
in other categories , an object may be non - infinite ( _ finite _ ) while its underlying set ( if it has one ) is infinite .
the computational methodology due to yaroslav d. sergeyev for executing numerical calculations with infinities and infinitesimals is considered from this categorical point of view .
* keywords : * grossone , @xmath5 notation , finite , infinite , generic finite , category + * ams subject classification : * 03e65 , 65 - 02 , 65b10 , 60a10 + |
it is well known that the gravitational lensing by the foreground objects ( e.g. , galaxies ) can alter the apparent brightness of background objects ( e.g. , quasars ) , which may contaminate our observations .
three decades ago , barnothy and barnothy ( 1968 ) proposed that all the quasars were nothing but the gravitationally magnified images of seyfert galactic nuclei .
press and gunn ( 1973 ) showed that the probability of occurrence of gravitational lensing in an @xmath5 universe is nearly unity .
for many years there had been a lack of both convincing observational and theoretical supports for these speculations .
however , numerous and unprecedented deep galaxy surveys have recently revealed a considerably large population of faint galaxies ( metcalfe et al .
1996 ; references therein ) .
this motivates one to readdress the question of whether the observations of background objects are seriously affected by the gravitational lensing effect of foreground galaxies . for this purpose ,
zhu & wu ( 1997 ) have calculated the lensing cross - sections of background quasars by the foreground galaxies , and concluded that , despite the fact that there is a considerably high surface number density of faint galaxies , the total lensing cross - sections by galaxies towards a distant source are still rather small , when only a special cosmological model of @xmath6 is considered .
nevertheless , the optical depth ( probability ) of gravitational lensing depends sensitively on the cosmological models .
it is worth examining whether the above claim is valid under general cosmological models . in this paper , we extend our previous work to a variety of cosmological models , which are characterized by the mass density parameter @xmath0 and the normalized cosmological constant @xmath1 ( cf . ,
carroll , press & turner 1992 ) .
this is well motivated because cosmological models with nonzero cosmological constant have become quite popular recently .
many years ago , gott , park & lee ( 1989 ) gave the general expressions for the optical depth and mean image separation in general friedmann - lemaitre - robertson - walker ( flrw ) cosmological models .
yet , these expressions are complicated and thereby hard to use in practice .
one of the purposes of this paper is thus to simplify the formula .
furthermore , we would like to investigate how the cosmological parameters affect the estimate of the lensing cross - section .
we assume a homogeneous and isotropic universe described by a robertson - walker metric ( weinberg 1972 ) : @xmath7 or in the form @xmath8 where @xmath9 the relation of the measurables to the unmeasurables is ( lightman et al .
1975 ; carroll , press & turner 1992 ) @xmath10 where @xmath11 is the present time of the universe , and @xmath12 , @xmath13 and @xmath14 are the angular diameter distance , the proper motion distance and the luminosity distance respectively .
distances used in lensing theory are the angular diameter distances ( schneider , ehlers & falco 1992 ) . from eqs .
[ r ] and [ distances ] , one can derive an important relation @xmath15 where @xmath16 and @xmath17 are the angular distances from the lens to the source and from the observer to the source , respectively . using the einstein field equation
, it can be shown that the relation of the comoving distance @xmath18 to the redshift for the general flrw cosmologies is @xmath19 for our purpose , the comoving volume @xmath20 is more convenient than the traditional physical volume . within the shell
@xmath21 at @xmath18 , @xmath20 reads ( gott , park & lee 1989 ) @xmath22
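since the displayed relations are hidden behind placeholders , the sketch below simply implements the standard flrw dimensionless comoving distance integral ( in units of the hubble radius ) as one plausible reading of the comoving distance relation ; the function names and normalization are assumptions .

import math

def comoving_distance(z, omega_m, omega_lambda, steps=10000):
    # dimensionless comoving distance , the integral of dz' / e(z') from 0 to z ,
    # with e(z) = sqrt( om (1+z)**3 + ok (1+z)**2 + ol ) and ok = 1 - om - ol
    omega_k = 1.0 - omega_m - omega_lambda
    h = z / steps
    total = 0.0
    for i in range(steps):  # midpoint rule
        zp = (i + 0.5) * h
        e = math.sqrt(omega_m * (1 + zp) ** 3 + omega_k * (1 + zp) ** 2 + omega_lambda)
        total += h / e
    return total

print(comoving_distance(3.0, 0.3, 0.7))  # e.g. a lambda - dominated flat model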
first of all , we consider the lensing cross - section ( turner , ostriker & gott 1984 ) due to a specific galaxy . following turner et al .
( 1984 ) , we model the mass density profile of the total galaxy matter as the singular isothermal sphere ( sis ) , whose magnification for a point source is given by ( schneider , ehlers & falco 1992 ; wu 1996 ) @xmath23 where @xmath24 is the observed angular position of the source ( image position ) , @xmath25 is the angular radius of einstein ring and @xmath26 is the velocity dispersion of the lensing galaxy . note that we only include the contribution of the primary image because here we will not deal with the statistics of multiple images .
the dimensionless magnification cross - section for a point source located at @xmath27 produced by a single sis galaxy at @xmath28 is @xmath29 ^ 2 , where the relation of eq .
[ ratio ] has been employed .
now , let s consider the contributions of an ensemble of galaxies having different luminosities and redshifts .
the present - day galaxy luminosity function can be described by the schechter function ( peebles 1993 ) @xmath30 where @xmath31 indicates the morphological type of galaxies : @xmath31=(e , s0 , s ) .
the above expression can be converted into the velocity dispersion distribution through the empirical formula between the luminosity and the central dispersion of local galaxies @xmath32 .
we keep the same parameters ( @xmath33 ) as those adopted by kochanek ( 1996 ) based on the surveys ( loveday et al .
1992 , marzke et al .
1994 ) , which yield @xmath34 km / s and @xmath35 for @xmath36 galaxies , and the morphological composition @xmath37 for ( @xmath38 ) . for the spatial distribution of galaxies ,
we employ a general flrw cosmological model parametrized by @xmath0 and @xmath1 , which has been outlined in section 2 . finally , the total dimensionless magnification cross - section by galaxies at redshifts ranging from 0 to @xmath27 for the distant sources like quasars at @xmath27 is @xmath39 the parameter @xmath40 represents the effectiveness of the @xmath31-th morphological type of galaxies in producing double images ( turner et al .
1984 ) , which reads @xmath41 where @xmath42 is the velocity bias between the velocity dispersion of stars and of dark matter particles .
the above equation can be further written as @xmath43 if the integral is performed from @xmath44 to @xmath45 . in practice ,
the galaxy luminosities have the minimum and maximum limits , and , therefore , eq . [ fi - maximum ] is the maximum estimate of @xmath40 .
the @xmath27 dependent factor @xmath46 is @xmath47 ^ 2 f^2(\chi_d) \, d\chi_d . for general flrw cosmologies , an analytic expression is found : @xmath48 for \omega_m + \omega_\lambda > 1 , and \left| \omega_m + \omega_\lambda - 1 \right|^{-3/2} \left[ \frac{1}{8} ( -1 + 3 \coth^2 \chi_s ) \chi_s - \frac{3}{8} \coth \chi_s \right] for \omega_m + \omega_\lambda < 1 , where @xmath49 can be calculated through eq .
[ comoving ] . for a flat universe ( @xmath50 )
, it reduces to ( turner 1990 ) @xmath51 ^ 3 for \omega_m + \omega_\lambda = 1 . if @xmath52 and @xmath53 , it reads ( turner et al .
( 1984 ) @xmath54 ^ 3 / ( 1 + z_s )^{3/2} for \omega_m = 1 , \omega_\lambda = 0 . we should point out that the expression of eq .
[ total - cross ] is very useful . dividing the expression by @xmath55 ,
one gets the fraction of the sky within redshift @xmath27 which is magnified by the factor greater than @xmath56 : @xmath57 we employ @xmath58 denoting the total effective parameter of all galaxies in producing multiple images . further omitting the @xmath56-dependent term in eq .
[ probability ] , one obtains the conventional optical depth for multiple images . in the above calculations , we have assumed that the comoving number density of galaxies is constant .
however , this may not hold true for the realistic situation .
the influence of galaxy evolution on the the lensing cross - section should also be taken into account .
zhu and wu ( 1997 ) have included this effect by using the galaxy merging model proposed by broadhurst et al .
( 1992 ) , since the scenario of galaxy merging can account for both the redshift distribution and the number counts of galaxies at optical and near - infrared wavelengths ( broadhurst et al . 1992 ) .
there are two effects arising from the galaxy merging :
the first is that there are more galaxies and hence more lenses in the past .
the second is that galaxies are typically less massive in the past and hence less efficient as lenses . as a result of the two effects ,
the total magnification cross - section remains roughly unchanged ( zhu & wu 1997 ) .
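to illustrate how the redshift factor behaves , the sketch below evaluates the z - dependence of the einstein - de sitter special case quoted above ( turner et al . 1984 ) ; identifying the bracketed quantity with ( sqrt(1+z) - 1 ) and omitting the constant prefactor hidden in the placeholder are assumptions .

import math

def f_eds_shape(z_s):
    # z - dependence of the einstein - de sitter lensing factor ,
    # ( (1+z)**0.5 - 1 )**3 / (1+z)**1.5 , constant prefactor omitted
    return (math.sqrt(1.0 + z_s) - 1.0) ** 3 / (1.0 + z_s) ** 1.5

for z in (1.0, 2.0, 3.0):
    print(z, f_eds_shape(z))  # grows with source redshift and saturates at high z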
knowing the analytic expressions for the lensing cross - sections in general flrw cosmologies , we can explore in detail the influences of cosmological parameters by numerical computation .
we compute the probability @xmath59 so as to investigate whether the counts of background objects ( like quasars ) are significantly contaminated .
eq . [ probability ] contains three factors , namely , the @xmath56-dependent term , the galaxies term and the cosmological term , associated with @xmath59 . here
we concentrate on the cosmological term by adopting the maximum value @xmath60 ( kochanek 1996 ) and a moderate magnification of @xmath61 . fig . 1
shows how the probability @xmath62 depends on the normalized cosmological parameters @xmath0 and @xmath1 for an open ( @xmath63 , @xmath64 ) and a flat ( @xmath65 ) universe respectively . in our calculations , the source has been set at @xmath66 or @xmath67 respectively , and a moderate magnification of @xmath68 has been used .
indeed , the probability @xmath59 depends sensitively on cosmological parameters @xmath0 and @xmath1 .
however , even for a @xmath4-dominated ( @xmath69 ) flat universe , only a small fraction ( @xmath70 ) of the sky can be moderately ( @xmath3 ) lensed by galaxies .
of course , by taking a somewhat lower value for @xmath56 the probability of the magnification can significantly increase . in order to get a more robust conclusion
, we now estimate what value of magnification would affect the current observations of quasar count , i.e. , we calculate the value of @xmath71 which makes @xmath72 . the resulting magnification , which depends on both the cosmological model and the source redshift , is shown in fig . 2 .
since the magnification is generally much lower than @xmath73 , our result reinforces the hypothesis that the quasar counts are not seriously contaminated by the galactic lenses .
barnothy j. , barnothy m. f. , 1968 , science 162 , 348 .
broadhurst t. , ellis r. s. , glazebrook k. , 1992 , nature 355 , 55 .
carroll s. m. , press w. h. , turner e. l. , 1992 , ara&a 30 , 499 .
gott j. r. , park m. g. , lee h. m. , 1989 , apj 338 , 1 .
kochanek c. s. , 1996 , apj 466 , 638 .
lightman a. p. et al . , 1975 , _ problem book in relativity and gravitation _ ( princeton univ . press , princeton ) .
loveday j. , peterson b. a. , efstathiou g. , maddox s. j. , 1992 , apj 390 , 338 .
marzke r. o. , geller m. j. , huchra j. p. , corvin h. g. , 1994 , aj 108 , 437 .
metcalfe n. , shanks t. , campos a. , fong r. , gardner j. p. , 1996 , nature 383 , 236 .
peebles p. j. e. , 1993 , _ physical cosmology _ ( princeton univ . press , princeton ) .
press w. h. , gunn j. e. , 1973 , apj 185 , 397 .
schneider p. , ehlers j. , falco e. e. , 1992 , _ gravitational lenses _ ( springer verlag , berlin ) .
turner e. l. , 1990 , apj 365 , l43 .
turner e. l. , ostriker j. p. , gott j. r. , 1984 , apj 284 , 1 .
weinberg s. , 1972 , _ gravitation and cosmology _ ( wiley , new york ) .
wu x. p. , 1996 , 17 , 1 .
zhu z. h. , wu x. p. , 1997 , a&a 326 , l9 . | for a wide variety of cosmological models characterized by the cosmic mass density @xmath0 and the normalized cosmological constant @xmath1 , we derive an analytic expression for the estimate of magnification cross - sections by an ensemble of isothermal spheres as models of galactic mass distributions .
this provides a simple approach to demonstrate how the lensing probability by galaxies depends on the cosmological parameters .
an immediate consequence is that , while a non - zero cosmological constant indeed leads to a significant increase of the lensing probability as it has been shown in the literature , only a small fraction of sky to @xmath2 can be moderately ( @xmath3 ) lensed by galaxies even in a @xmath4-dominated flat universe .
therefore , whether or not there is a nonzero cosmological constant , it is unlikely that the overall quasar counts have been seriously contaminated by the presence of galactic lenses .
the network growth literature commenced with the seminal barabsi - albert model , posited initially in @xcite .
the motivation of studying the evolution of networks is to grasp underlying mechanisms that drive the growth process , which ramify into observable macro properties ( such as a power - law degree distribution in the case of the barabsi - albert model ) .
examples of other growth models include models with edge growth @xcite , aging effects @xcite , node deletion @xcite , accelerated growth @xcite , copying @xcite , and fitness - based models @xcite .
these models are all purely structural , that is , the connectivity of new nodes is determined by factors that depend on time or structural measures of the network . in all the examples mentioned above , the prototypical way of analyzing a network growth model is as follows : a micro growth mechanism is devised ( for example for the barabsi - albert model , nodes enter sequentially and attach to existing nodes with degree - proportional probabilities ) , then the evolution of a desired quantity , such as the degree distribution is quantified ( e.g. , via a rate equation ) , then the equations are simplified in the long time limit ( the limit as @xmath0 ) , and then they are solved to yield steady - state quantities of interest .
the practical shortcoming of such an approach is that no real network has infinite size , and the assumption of equilibrium is unrealistic for most networks .
note that this can lead to crucial methodological consequences concerning the falsifiability of the model .
for example , consider the barabsi - albert model .
it predicts that in the steady state , the network will exhibit a power law . in this case , if we take a real network and show that its degree distribution does not follow a power law , this will not refute the theory because one could counter with the contention that the network ( no network , in fact ) has infinite size , so the network at hand does not have the necessary condition ( i.e. , infinite size ) for the prediction that theory provides . to alleviate this shortcoming and to remedy the falsifiability caveat
, we proposed an alternative approach : focusing on the temporal evolution of the network for a given growth model . that is , instead of solving for the quantities of interest in the steady state , solve for them at arbitrary times and for arbitrary initial conditions . this way ,
for a given real network , an arbitrary point in time can be considered the origin of time and the network at that instant will constitute the initial condition .
then observations on the temporal evolution of desired quantities can be compared to the theoretical prediction , in order to assess the theory . in the present paper ,
we focus on a shifted - linear preferential growth mechanism .
we find the expected degree distribution as a function of time , for arbitrary initial conditions and arbitrary times .
the results are corroborated with monte carlo simulations throughout the paper .
the growth process starts from a given initial network with @xmath1 nodes and @xmath2 links , with known degree distribution @xmath3 .
the network grows via the successive addition of new nodes . at each time
step a new node is born , and it forms @xmath4 links to existing nodes , according to the preferential linking : the probability that an existing node @xmath5 receives a link from the new node at time @xmath6 is proportional to @xmath7 , where @xmath8 is the degree of node @xmath5 at time @xmath6 , and @xmath9 is the initial attractiveness , a positive constant of the model . to obtain normalized probabilities , we need to divide @xmath7 for each @xmath5 by the sum of this quantity over every node .
the sum over @xmath10 yields twice the number of links at time @xmath6 , which is @xmath11 .
the sum over @xmath9 yields @xmath12 , where @xmath13 is the number of nodes at time @xmath6 , which equals @xmath14 .
thus , the probability that node @xmath5 receives a link emanated from the newly - born node equals @xmath15 hereinafter , we will denote @xmath16 by @xmath17 , and @xmath18 by @xmath19 . so transforms into @xmath20
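a minimal monte carlo sketch of this growth rule is given below ; the variable names ( beta for the number of links per new node , a for the initial attractiveness ) are stand - ins for the hidden placeholders .

import random

def grow(initial_degrees, beta, a, steps):
    # shifted - linear preferential attachment : each new node sends beta links ,
    # and an existing node i is chosen with probability proportional to k_i + a
    degrees = list(initial_degrees)
    for _ in range(steps):
        weights = [k + a for k in degrees]
        targets = set()
        while len(targets) < beta:  # beta distinct targets per newcomer
            targets.add(random.choices(range(len(degrees)), weights=weights)[0])
        for i in targets:
            degrees[i] += 1
        degrees.append(beta)        # the newly born node starts with degree beta
    return degrees

# e.g. a 4 - regular ring of 200 nodes as the initial condition , as in the simulations below
final = grow([4] * 200, beta=2, a=20, steps=1000)
print(sum(1 for k in final if k == 2))  # an empirical value of n_k(t) for k = 2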
at each timestep , we can quantify the expected change in @xmath21 , which is the number of nodes in the network that have degree @xmath22 at time @xmath6 .
the value of @xmath21 can be altered if at time @xmath6 , an existing node with degree @xmath22 receives a link from the newly - born node ( which would increment the degree of the receiving node to @xmath23 , decrementing @xmath24 ) , or if an existing node with degree @xmath25 receives a link ( which would increment the degree of the receiving node to @xmath22 , incrementing @xmath24 ) . for each incoming node ,
@xmath26 increments .
the following rate equation quantifies the evolution of @xmath21 : @xmath27 + \delta_{k,\beta} . this can be rearranged and expressed equivalently as follows @xmath28 + \delta_{k,\beta} .
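a hedged reconstruction of the placeholder equation , assembled from the verbal description above ( the symbols e_0 and n_0 for the initial numbers of links and nodes are assumptions ) , reads

n_k(t+1) - n_k(t) \;=\; \beta \, \frac{ (k-1+a)\, n_{k-1}(t) \;-\; (k+a)\, n_k(t) }{ 2 ( e_0 + \beta t ) + a ( n_0 + t ) } \;+\; \delta_{k,\beta} ,

where the kronecker term accounts for the newly born node entering at degree \beta .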
we now employ a time - continuous approximation to this equation and replace it with the following difference - differential equation : @xmath29 + \delta_{k,\beta}. \label{nk_dot } \end{aligned}\ ] ] note that the relative error of this approximation at each timestep is proportional to @xmath30 . even for short times
, the @xmath31 in the denominator ensures that the truncation error is small , provided that @xmath2 is large . in our monte carlo simulations ,
the initial network comprises 100 links , and the predictions are remarkably accurate .
note that 100 links is tiny as compared to many real networks , such as social and biological networks , citation and collaboration networks , the web and other online networks .
the typical size of these networks is far larger than 100 , so the approximation will be conservative in real settings . in monte carlo simulations
, we also tested larger initial networks and verified the accuracy of predictions for large systems .
simulation results are presented in section [ sec : simul ] .
figure [ figs ] illustrates the remarkable accuracy of the theoretical predictions , where the error bars are smaller than the markers used for depiction . to solve
, we define the generating function : @xmath32 this is the conventional z transform .
we multiply both sides of by @xmath33 and sum over @xmath22 .
the left hand side yields @xmath34 . for the terms on the right hand side , we use two standard properties of the z - transform : if the generating function of some sequence @xmath35 is given by @xmath36 , then ( 1 ) the generating function for the sequence @xmath37 is given by @xmath38 , and ( 2 ) the generating function for the sequence @xmath39 is given by @xmath40 . using these two properties , equation yields @xmath41 + z^{-\beta} .
this can be rearranged and recast as @xmath42 in appendix [ app : pde ] we solve this partial differential equation .
let us define @xmath43 using these definitions , the solution to reads @xmath44 .
note that @xmath45 is obtained by taking the z - transform of the sequence @xmath46 ( which is given as the initial condition ) and then replacing @xmath47 by @xmath48 . in appendix
[ app : inv ] we take the inverse transform of this expression to obtain @xmath21 .
let us define @xmath49 using this definition for brevity , the inverse transform of reads @xmath50 / \beta \cdot \frac{\gamma(k+\theta)}{\gamma(\beta+\theta)} \cdot \frac{\gamma\left(\beta+2+\frac{\theta}{\beta}+\theta\right)}{\gamma\left(k+3+\frac{\theta}{\beta}+\theta\right)} \, u(k-\beta) - \frac{\lambda(1-c)^{\theta}c^{k}}{\beta} \, \gamma\left(\beta+2+\frac{\theta}{\beta}+\theta\right) \frac{\gamma(k+\theta)}{\gamma(\beta+\theta)} \sum_{r=\beta}^{k} \frac{\left(\frac{1-c}{c}\right)^{r}}{(k-r)!\,\gamma\left(m+3+\frac{\theta}{\beta}+\theta\right)} .
we can divide this by the number of nodes at time @xmath6 to obtain the fraction of nodes with degree @xmath22 at time @xmath6 . the result is @xmath51 the first term is the effect of initial nodes . in the long time limit
, the @xmath14 in the denominator makes this term vanish .
moreover , from we observe that in the limit as @xmath0 , we have @xmath52 .
this means that every @xmath53 term as well as the @xmath54 prefactor all tend to zero in the long time limit .
note that the @xmath55 in the denominator will not cause divergence , because the @xmath56 prefactor removes the singularity .
so the first term on the right hand side of vanishes in the long time limit , as we intuitively expect . the second term on the right hand side of reaches a horizontal asymptote in the long time limit . in this limit
, we have @xmath57 . finally , the last term on the right hand side of vanishes in the long time limit for the same reasons delineated above for the first term .
so in the steady state , the first and third terms have no share in the degree distribution , and the second term dominates .
we have : @xmath58 this is in agreement with the results in @xcite .
finally , we can set @xmath59 to recover the degree distribution of the conventional barabsi - albert model : @xmath60 this is in agreement with the long - known result , as given for example in @xcite .
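as a quick sanity check of this classical result , the sketch below verifies numerically that the standard form p_k = 2 m (m+1) / [ k (k+1) (k+2) ] sums to 1 over k >= m ( identifying the placeholder with this standard form is an assumption ) .

def p_ba(k, m):
    # steady - state barabsi - albert degree distribution , standard form
    return 2.0 * m * (m + 1) / (k * (k + 1) * (k + 2))

m = 2
print(sum(p_ba(k, m) for k in range(m, 100000)))  # telescopes to 1 as the cutoff grows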
figure [ fig_1 ] shows the simulation results for a 4-regular ring of 200 nodes as the initial network ( a 4-regular ring is obtained by connecting every second neighbor on a ring ) .
the temporal evolution of @xmath21 for three distinct values of @xmath22 is depicted . as can be seen ,
theoretical predictions match the simulation results to high accuracy .
the error bars are of the size of the markers used in the graph .
the value of @xmath4 is 2 and @xmath9 is 20 . since @xmath9 is large , the preferential share of the growth kernel is weaker ; generally , the growth process is closer to uniform growth as @xmath9 increases .
figure [ fig_2 ] depicts the simulation results and theoretical predictions for a small - world network , which is constructed by taking a ring of 200 nodes and establishing every non - existing link with probability 0.05 . the initial network in figure [ fig_3 ] is a 4-regular ring of 300 nodes .
figure [ fig_4 ] presents the simulation results for a larger initial network : a ring of 1000 nodes .
it can be observed that even for short times , the accuracy of the theoretical predictions is remarkably high .
we contended that conventional emphasis on the steady state for the analysis of network growth models might engender methodological caveats .
we proposed focusing on arbitrary times and solving the growth problem for arbitrary initial conditions .
we considered the shifted - linear growth scheme and obtained the degree distribution as a function of time for arbitrary initial conditions .
we verified the theoretical predictions with monte carlo simulations .
plausible extensions to this result include moving beyond the expected degree distribution , and obtaining the distribution of degree distributions .
this is essential for devising rigorous statistical tests to assess network growth mechanisms upon observing longitudinal data on the degree distribution . for a given initial network
, there are various paths that the system can take , due to the random nature of the growth process .
what we obtained in this paper is the average over all those paths .
but at each timestep , there is a distribution over all those paths .
if we knew that distribution , then we could devise statistical recipes for hypothesis testing , to be able to answer questions of the form : " if the posited growth mechanism is true , what is the probability that the empirical degree distribution at time @xmath6 would occur ? "
since a hypothesis testing scheme like the one discussed requires a null hypothesis , it would be plausible to solve the same problem for the uniformly - growing networks .
so the problem is : under uniform growth , for a given initial network , what is the probability that at time @xmath6 , the number of nodes with degree @xmath22 would be @xmath24 ? finally , it would be plausible to consider time
reversal in network growth problems . given a growth mechanism and the degree distribution at time @xmath6 ,
what is the distribution that the system has most likely been at time @xmath61 ?
what information does observing the network at some state at time @xmath6 ( say , observing its degree distribution at time @xmath6 ) give about the states ( degree distributions ) of the system at earlier times ?
barabsi , a.l . , albert , r . : emergence of scaling in random networks , science 286 , 509 - 512 ( 1999 ) .
albert , r. , barabsi , a.l . : topology of evolving networks : local events and universality , phys . rev . lett . 85 , 5234 ( 2000 ) .
krapivsky , p. l. , rodgers , g. j. , redner , s . : degree distributions of growing networks , phys . rev . lett . 86 , 5401 ( 2001 ) .
klemm , k. , eguiluz , v.m . : highly clustered scale - free networks , phys . rev . e 65 , 036123 ( 2002 ) .
dorogovtsev , s.n . , mendes , j.f.f . : evolution of networks with aging of sites , phys . rev . e 62 , 1842 ( 2000 ) .
moore , c. , ghoshal , g. , newman , m.e.j . : exact solutions for models of evolving networks with addition and deletion of nodes , phys . rev . e 74 , 036121 ( 2006 ) .
sarshar , n. , roychowdhury , v. : scale - free and stable structures in complex ad hoc networks , phys . rev . e 69 , 026101 ( 2004 ) .
dorogovtsev , s.n . , mendes , j.f.f . : effect of the accelerating growth of communications networks on their structure , phys . rev . e 63 , 025101 ( 2001 ) .
kumar , r. , raghavan , p. , rajagopalan , s. , sivakumar , d. , tomkins , a. , upfal , e. : stochastic models for the web graph , proc . 41st symp . found . of comp . sci . , 57 - 65 ( 2000 ) .
krapivsky , p. l. , redner , s. , leyvraz , f. : connectivity of growing random networks , phys . rev . lett . 85 , 4629 ( 2000 ) .
bianconi , g. , barabsi , a.l . : competition and multiscaling in evolving networks , europhys . lett . 54 , 436 ( 2001 ) .
smolyarenko , i. , hoppe , k. , rodgers , g. : network growth model with intrinsic vertex fitness , phys . rev . e 88 , 012805 ( 2013 ) .
ghadge , s. , killingback , t. , sundaram , b. , tran , d.a . : a statistical construction of power - law networks , int . j. parallel emergent and distributed systems 25 , 223 - 235 ( 2010 ) .
zwillinger , d. : handbook of differential equations , ch . 2 , san diego , ca : academic press ( 1998 ) .
fotouhi , b. , rabbat , m.g . : network growth with arbitrary initial conditions : degree dynamics for uniform and preferential attachment , phys . rev . e 88 , 062801 ( 2013 ) .
dorogovtsev , s. n. , mendes , j. f. f. , samukhin , a. n. : structure of growing networks with preferential linking , phys . rev . lett . 85 , 4633 ( 2000 ) .
krapivsky , p. l. , redner , s. : organization of growing random networks , phys . rev . e 63 , 066123 ( 2001 ) .
bollobs , b. , riordan , o. , spencer , j. , tusnady , g. : the degree sequence of a scale - free random graph process , rand . struct . and alg . 18 , 279 - 290 ( 2001 ) .
chung , f. r. , lu , l. : complex graphs and networks , cbms regional conference series in mathematics 107 , american mathematical society ( 2006 ) .
the pde we need to solve is : @xmath62 we employ the method of characteristic curves to solve this equation ( see for example @xcite , for background on this method ) .
we need to first solve the following system of equations : @xmath63 from the first equation we get @xmath64 the second equation is @xmath65 this can be rearranged and rewritten as follows @xmath66 using , this transforms into @xmath67 this is an ordinary first - order linear differential equation , with integrating factor @xmath68 .
the solution is given by @xmath69 , where @xmath70 , according to the method of characteristics , is an arbitrary function of @xmath71 that is uniquely specified for given initial conditions .
we expand the integrand before performing the integration .
we have @xmath72 plugging this into , we get @xmath73 = z^{\theta} \left[ \frac{c}{\beta} \sum_{m=0}^{\infty} (-1)^{m} \frac{z^{-\frac{\nu}{\beta}-m-\beta-\theta}}{\frac{\nu}{\beta}+m+\beta+\theta} + \phi(c) \right] = \frac{c}{\beta} \sum_{m=0}^{\infty} (-1)^{m} \frac{z^{-\frac{\nu}{\beta}-m-\beta}}{\frac{\nu}{\beta}+m+\beta+\theta} + \phi(c) z^{\theta} . now we use to plug in the explicit expression for @xmath71 into : @xmath72 plugging this into , we get @xmath74 z^{\theta} .
let us define @xmath75 then can be rewritten as follows : @xmath76 .
we need to uniquely determine @xmath77 . at time
@xmath78 , equation becomes @xmath79 , which gives \phi\left[ (z-1)^{\frac{\nu}{\beta}} \lambda \right] = z^{-\theta} \left[ \psi(z,0) - \lambda f(z) \right] , and hence \phi(x) = \left[ \left(\frac{x}{\lambda}\right)^{\frac{\beta}{\nu}} + 1 \right]^{-\theta} \left[ \psi\left( \left(\frac{x}{\lambda}\right)^{\frac{\beta}{\nu}} + 1 , 0 \right) - \lambda f\left( \left(\frac{x}{\lambda}\right)^{\frac{\beta}{\nu}} + 1 \right) \right] .
also , let us define @xmath80 then it follows that @xmath81 using , , we can simplify into the following : @xmath82 = z^{\theta} \left( \frac{z-c}{1-c} \right)^{-\theta} \left[ \psi\left( \frac{z-c}{1-c} , 0 \right) - \lambda f\left( \frac{z-c}{1-c} \right) \right] . substituting the last term on the right hand side of with the expression in , we arrive at @xmath83 .
now we need to take the inverse transform of this expression .
we do this term by term .
first , we take the inverse transform of @xmath84 .
we have @xmath89 \frac{1}{\frac{\nu}{\beta}+m+\beta+\theta} \binom{\frac{\nu}{\beta}+m}{m} \binom{\frac{\nu}{\beta}}{r} = \frac{1}{\beta} \sum_{m} \frac{(-1)^{k-\beta}}{\frac{\nu}{\beta}+m+\beta+\theta} \binom{\frac{\nu}{\beta}+m}{m} \binom{\frac{\nu}{\beta}}{k-m-\beta} . this yields the inverse transform of the first term on the right hand side of . for the second and third terms , we first ask : if the inverse transform of some function @xmath84 is known , and is given by , say , @xmath92 , then what is the inverse transform of @xmath93 ?
we have : @xmath94 z^{-k} .
so the inverse transform of @xmath93 is given by @xmath95 using this result , we can take the inverse transform of the other two terms on the right hand side of .
we obtain @xmath96 / \beta \cdot \frac{(k+\theta-1)!}{(\beta+\theta-1)!} \cdot \frac{\gamma\left(\beta+2+\frac{\theta}{\beta}+\theta\right)}{\gamma\left(k+3+\frac{\theta}{\beta}+\theta\right)} \, u(k-\beta) - \frac{\lambda(1-c)^{\theta}c^{k}}{\beta} \, \gamma\left(\beta+2+\frac{\theta}{\beta}+\theta\right) \frac{\gamma(k+\theta)}{\gamma(\beta+\theta)} \sum_{r=\beta}^{k} \frac{\left(\frac{1-c}{c}\right)^{r}}{(k-r)!\,\gamma\left(m+3+\frac{\theta}{\beta}+\theta\right)} . | in studying network growth , the conventional approach is to devise a growth mechanism , quantify the evolution of a statistic or distribution ( such as the degree distribution ) , and then solve the equations in the steady state ( the infinite - size limit ) .
consequently , empirical studies also seek to verify the steady - state prediction in real data .
the caveat concomitant with confining the analysis to this time regime is that no real system has infinite size ; most real growing networks are far from the steady state .
this underlines the importance of finite - size analysis . in this paper
, we consider the shifted - linear preferential attachment as an illustrative example of arbitrary - time network growth analysis .
we obtain the degree distribution for arbitrary initial conditions at arbitrary times .
we corroborate our theoretical predictions with monte carlo simulations . |
detection of cancer in the early asymptomatic stage improves the cure rates and quality of life of the patient by minimizing extensive , debilitating treatments ; such early lesions can be conservatively managed with minimal surgical morbidity and 100% survival .
early cancerous lesions are asymptomatic and vary in clinical presentation , as they do not have the ulceration , induration , elevation , bleeding , and cervical adenopathy seen in advanced cancers . evaluation of leukoplakic lesions , which are the most common precursors of oral cancer , is therefore significant for its prognostic implications .
however , such a lesion may show severe dysplasia , carcinoma in - situ , or frank carcinoma on histopathological examination .
cytological examination has failed to diagnose cases of dysplasia or malignancy as accurately as histopathology . in the oral cavity
, the cytology has been of limited use due to the superficial cells collected and the keratin layer that is present . as a result , deeper epithelial abnormalities are not detected . in earlier days
, cotton swabs were used for collection of smears and this was followed by sponge , wooden , or metal spatulas , where only superficial layer cells are collected . the basis for development of newer techniques for collection of cells
is that dysplasia starts in the basal layer ( stratum germinativum ) and extends to all the layers of the epithelium . in order to collect the basal layer cells , a new transepithelial ,
noninvasive technique was developed by the oral cdx laboratories . in this technique , the smears obtained contain cells from all the layers of the epithelium , which has improved diagnostic applications for mass screening campaigns , without the need for surgery or surgically trained personnel for taking the biopsy .
the concept , developed in western countries , has been established there , but its use in the south indian population has not been validated . this confirmatory study intends to validate the use of this novel technology in a study population from south india .
the present study was done on 60 patients who were referred to the outpatient department of sibar institute of dental sciences between june 2008 and june 2010 , with a clinical diagnosis suggestive of oral leukoplakia .
the criteria of inclusion were as follows :
diagnosing the oral lesions clinically as leukoplakia by proper case history and clinical examination .
patients are treated for 1 - 2 weeks with antifungal and antibiotics to rule out other lesions or infections .
all the lesions selected are more than 1 cm in size .
no patients with obvious symptoms of pain or swelling are included in the study .
first , the lesion was brush biopsied according to the manufacturer 's guidelines , and then scalpel biopsy was performed .
all the specimens were formalin - fixed , processed in graded alcohols , and stained with hematoxylin and eosin .
a patient information form was filled in and mailed to oral cdx india ltd . , mumbai , for computer evaluation by a neural network - based image processing system specifically designed to detect oral precancerous and cancerous cells . the results of oral cdx were classified into the following four categories :
negative - no epithelial abnormality .
atypical - abnormal epithelial changes of uncertain diagnostic significance .
positive - definitive cellular evidence of epithelial dysplasia or carcinoma .
inadequate - incomplete transepithelial biopsy specimen .
the histopathological dysplasia grades were stratified into two risk categories , as follows :
no / questionable / mild - low risk .
moderate / severe - high risk .
gender , age , and demographics , including site and habits , were collected from the case history , and the clinical , histopathological , and oral cdx findings were recorded . a 2 × 2 table was used to calculate the sensitivity and specificity of the diagnostic tests .
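for reference , a minimal sketch of how such 2 × 2 diagnostic metrics are computed ( the counts below are placeholders , not the study 's data ) :

def diagnostic_metrics(tp, fp, fn, tn):
    # rows index the test result; columns the histopathology ground truth
    sensitivity = tp / (tp + fn)   # diseased cases detected by the test
    specificity = tn / (tn + fp)   # non-diseased cases correctly negative
    ppv = tp / (tp + fp)           # positive predictive value
    return sensitivity, specificity, ppv

print(diagnostic_metrics(tp=10, fp=5, fn=6, tn=39))  # placeholder counts only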
the study group consisted of 60 cases , of which 49 were males and 11 were female patients [ table 1 ] .
the age of the patients ranged from 29 to 70 years , with a mean age of 48.9 years .
most of the cases fell in the third and fourth decades of life ( p < 0.001 ) [ table 2 and graph 1 ] .
the most common site of occurrence in this study was the buccal mucosa ( 36 cases ) . the lip ( 16 ) , palate ( 6 ) , and tongue ( 2 ) were the sites of predilection in the remaining cases [ table 3 and graph 2 ] .
in the present study , tobacco smoking was present in all the cases , and reverse smoking with chuttas was seen in three female patients only . other habits like alcohol intake , pan chewing , and betel nut chewing along with tobacco smoking were seen in 32 cases [ table 4 ] . in the present study group ,
the clinical diagnosis was homogenous leukoplakia ( 43 cases ) , homogenous and erythematous leukoplakia ( 3 cases ) , and speckled leukoplakia ( 14 cases ) .
[ table / graph captions : gender distribution of study subjects ; age distribution of study subjects ; age - wise distribution of study subjects ; distribution of study subjects by site of occurrence ; correlation of adverse habits with disease ] the results showed that 16 cases were positive for dysplasia in histopathology .
of the 16 cases , two showed well - differentiated squamous cell carcinoma and 14 showed hyperorthokeratosis with mild dysplasia .
of the 12 abnormal oral brush biopsy cases , two showed cells positive for dysplasia or carcinoma [ table 5 ] .
two cases which were reported as well - differentiated squamous cell carcinomas in histopathology had been reported as positive for carcinoma in oral brush biopsy .
[ table caption : comparison of the oral brush biopsy test with histopathology ] of the 14 cases with hyperorthokeratosis and mild dysplasia in histopathology , only ten had been reported as showing atypical cells in brush biopsy .
four positive cases for dysplasia in histopathology were reported negative for dysplasia in oral brush biopsy .
the sensitivity of oral brush biopsy was 43.5% and the specificity 81.25% , with a positive predictive value of 58.3% [ table 6 ] . [ table caption : sensitivity , specificity , and ppv values ]
the subtle clinical features of pre - cancer and early oral cancers make them difficult to diagnose and hinder their recognition . pre - cancerous and early cancerous lesions are asymptomatic .
studies showed that more than 25% of dentists failed to recognize the true nature of innocuous looking but potentially malignant lesions .
deeper epithelial abnormalities often went undetected because of the cytological technique and the thickness of the keratin layer often present in lesional areas . in india , oral cancer represents a major health problem , constituting up to 40% of all cancers ; it is the most prevalent cancer in males and the third most prevalent in females . at screening centers , it is difficult to perform a scalpel biopsy for each lesion because of the lack of specialist services or poor patient compliance .
the transepithelial brush biopsy technique was designed to address this deficiency , allowing cells to be harvested from both superficial and deep regions of the epithelium .
this minimally invasive technique can be used as chair side test to assess the benign looking lesions .
dysplasia and early carcinomas are asymptomatic and commonly misinterpreted as benign lesions or innocuous oral problems .
the inconspicuous nature of these lesions or misleading perception of practitioners may primarily be responsible for the advanced stages of these tumors at the time of discovery .
leukoplakia is a potentially malignant lesion which should be followed periodically , both by clinical examination and by biopsy .
there is a statistically significant difference between smokers and non - smokers in keratinization of the oral cavity . an interesting feature of the present study is that the patients were unaware of the white lesions , as they were asymptomatic . the lesions were diagnosed as leukoplakia at the time of screening . the history revealed a smoking habit of long duration , ranging from 8 to 45 years , with a frequency of 10 to 15 smokes per day .
tobacco smoking was seen in all the cases , and some cases had habits of betel nut chewing , pan chewing , and alcohol intake along with tobacco smoking , as seen in many studies . in the studies done by sciubba et al . , christian et al . , poate et al . , and scheifele et al . , the site of predilection for leukoplakia was the buccal mucosa [ 8 , 10 , 12 ] . the buccal mucosa was also the most common site of occurrence of leukoplakia in the present study .
the age of the study group ranged from 29 to 70 years , with a mean age of 48.9 years .
other studies also showed a mean age in the fourth and fifth decades of life . in the present study , only clinically diagnosed oral leukoplakias were included . to date , the studies conducted have been biased , as the lesions at the time of screening were not subjected to both brush biopsy and scalpel biopsy ; earlier studies included lesions with obvious symptoms ( i.e. , class i patients ) or performed incisional biopsy only when the brush biopsy was abnormal . in the present study , both procedures were performed on every lesion .
comparison of the histopathological reports with the brush biopsy reports revealed that they were similar in 44 cases in the study group .
four cases with mild dysplasia histopathologically had been reported as negative for dysplasia in oral brush biopsy .
as our data show , it is conceivable that the false - negative rate is significantly higher than reported ( 4 of 60 cases ) . in the previous study done by sciubba in 1999 ,
the analysis should be considered incomplete because 517 of the 699 negative brush samples ( 73.9% ) were not followed with definitive incisional biopsy for diagnostic confirmation . in a study by svirsky et al . in 2002 , of 55 cases of negative brush biopsies , four cases had dysplasia on histopathological examination . in the present study ,
the sensitivity and specificity are 43.5% and 81.25% , respectively , with a positive predictive value of 58.3% . the low sensitivity may be due to the sample size or to performing both incisional and brush biopsy on the same lesion , which had not been done in other studies . in a retrospective study done by poate et al . in 2004 on 112 patients , a sensitivity of 71% , specificity of 32% , and a positive predictive value of 44% were reported .
they found 6 of 15 negative brush biopsy cases to have dysplasia or carcinoma present in the scalpel biopsy , underscoring the potential for false negative results .
christian , in 2002 , studied 930 individuals screened by dentists and dental hygienists ; only the abnormal brush biopsies underwent scalpel biopsy . in that study , the negative brush biopsies were not subjected to the gold standard of scalpel biopsy . in a study by scheifele et al . in 2004 , the sensitivity and specificity were 61% and 97% , respectively .
they had included squamous cell carcinomas and oral lichen planus , which skewed the sensitivity , specificity , and positive predictive values upward .
the noninvasive nature of the oral cdx system should obviate some of the reluctance of general dental practitioners who may have to undertake the invasive investigation of suspicious lesions , and perhaps reduce the referral of patients for the investigation of lesions which are unlikely to be potentially malignant .
some studies concluded that the oral brush biopsy technique shows promise , but that before any firm conclusions can be reached , a study needs to be conducted in a sufficient cohort of subjects where both brush biopsy and scalpel biopsy are performed on each participant .
this technique may be useful in the non - compliant patient who is unlikely to come back for a follow - up examination or accept an immediate referral to an oral surgeon . despite the overall uncertainty of this particular technology as an oral cancer diagnostic or case - finding aid
, the judicious use of the brush cytology in these scenarios may be clinically useful .
this is a hospital - based sample study , which does not fully represent the patients of a general practitioner , who are exactly the target group of oral cdx .
the study may be biased by the fact that only clinically diagnosed leukoplakias were subjected to oral cdx brush biopsy .
the oral brush biopsy technique eliminates the need for a surgical procedure in asymptomatic doubtful lesions , but its diagnostic accuracy should be assessed before use in routine clinical practice . oral brush biopsy , along with advanced markers and cytomorphometric analysis , is a promising future aid to non - surgical biopsy for the prediction of malignancy with less morbidity .
the sensitivity and specificity should be estimated in a large sample , where both brush biopsy and conventional histopathology need to be performed in all red and white lesions . | background : the diagnosis of oral malignancy and epithelial dysplasia has traditionally been based upon histopathological evaluation of a full - thickness biopsy from lesional tissue . as many studies have shown that incisional biopsy could cause progression of the tumors , many alternative methods of sample collection have been tested .
oral brush biopsy is a transepithelial biopsy that collects cells from the basal cell layer noninvasively . aim : to assess the diagnostic accuracy of brush biopsy compared to histopathology in a group of patients with features of potential malignancy . materials and methods : in the present study , 60 cases of clinically diagnosed leukoplakia were selected and subjected to histopathology and brush biopsy . results and conclusion : results showed that of 16 dysplasia cases confirmed by histopathology , only 12 were positively reported in oral brush biopsy . in 44 cases ,
the reports were the same for histopathology and brush biopsy .
the sensitivity of oral brush biopsy is 43.5% and specificity is 81.25% with a positive predictive value of 58.3% .
oral brush biopsy with molecular markers like tenascin and keratins can be an accurate diagnostic test . |
vancomycin - resistant enterococcus ( vre ) is an important pathogen among hospitalized patients .
significant morbidity , mortality , and increased hospital costs have been associated with infections due to vre .
detection of new cases of vre represents cross transmission via the hands of health care workers , contaminated equipment , and environmental surfaces .
the emergence of de novo vre through genetic mutations induced by glycopeptide exposure in an individual patient is unusual .
the risk of acquiring nosocomial vre may vary according to how endemic vre is in a specific location , exposure to contaminated equipment , proximity to vre carriers ( referred to as colonization pressure ) , and the patient 's hospitalization duration ( referred to as the time at risk ) .
colonization pressure is defined as the proportion of patients colonized with a particular organism in a defined geographic area within a hospital during a specified time period .
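expressed as a sketch ( illustrative only , not from the study ) , this definition reduces to a simple proportion :

def colonization_pressure(n_colonized, n_patients):
    # proportion of patients colonized with the organism in a defined
    # geographic area of the hospital during a specified time period
    return n_colonized / n_patients

print(colonization_pressure(6, 40))  # e.g. 6 carriers among 40 patients -> 0.15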
differentiating among the factors associated with nosocomial spread of vre or amplification of previously undetectable colonization is difficult in clinical settings .
the first report of vre from saudi arabia was in 1993 from king faisal specialist hospital - riyadh . however , there are only three studies of vre from saudi arabia .
one study described the frequency of vre as normal flora of the intestine in saudi patients .
the second study characterized 34 vancomycin - resistant vana e. faecium isolates obtained from two hospitals in saudi arabia .
the third study described the prevalence and risk factors for fecal carriage in patients at tertiary care hospitals ; in that study , only 7 out of 157 rectal swabs obtained from patients in different clinical settings were vre positive . here
, we report the result of the surveillance study of vre in a saudi arabian hospital and describe the associated risk factors for vre colonization and infection in our region .
the goal of this study was to identify significant risk factors for acquiring vre colonization and infection in icu settings using a case - control study .
this is a retrospective , case - control study of vre cases at king fahad specialist hospital - dammam , a referral hospital providing tertiary care for the province of dammam , saudi arabia .
the hospital has 18 medical - surgical intensive care unit beds , and more than 6000 patients are admitted to king fahad specialist hospital - dammam annually .
records obtained from the infection control section and the clinical microbiology laboratory were reviewed to identify icu patients who had vre ( e. faecalis or e. faecium ) isolated from either surveillance cultures or clinical specimens between february 2006 and march 2010 .
king fahad specialist hospital - dammam has guidelines for wide screening of new hospital admissions for mrsa and vre to prevent outbreaks of these infections .
the intensive care unit , for example , performs active surveillance for methicillin - resistant staphylococcus aureus ( mrsa ) and vre at icu admission .
medical records of patients with and without vre were reviewed , and the following information was collected ( as outlined in table 2 ) :
demographic data ( age , gender ) .
host - related factors ( icu admission , acute renal failure , sepsis or multiorgan failure , and underlying diseases ) .
hospital - related factors : referral from other hospitals ; hospital admission in the previous year ; length of stay of the previous year 's hospitalization ; icu length of stay .
medication - related factors : use of antimicrobial agents in the past three months , duration of antibiotic use , and use of corticosteroids , chemotherapeutics , and cyclosporine .
vre colonization or infection date is defined as the date on which a positive sample was collected .
recent antimicrobial use was defined as receipt of any antimicrobial agent for more than 3 consecutive days in the 3 months before the date of culture detection ; patients who received short courses of perioperative prophylaxis were excluded by this criterion .
renal insufficiency was defined as a creatinine concentration greater than 1.7 mg / dl .
a high - risk icu room was defined as a room previously occupied by patients colonized or infected with vre .
rectal swabs for culture were first inoculated onto columbia pnba and then into salt broth .
plates were incubated at 35c in ambient air and examined for growth at 24 and 48 hours .
any suspected colonies were identified by conventional laboratory methods , including gram stain , catalase test , bea test , and bvs ( vancomycin screening agar that incorporates the use of 6 ug / ml of vancomycin in brain - heart infusion agar ) .
black colonies ( esculin positive ) were then subcultured onto a blood agar plate for purity .
following 24 hours of incubation , a definite spot of growth or more than one colony at the site of inoculation on the bvs agar indicates that the enterococci may be vre .
species identification ( e. faecalis or e. faecium ) was confirmed by performing the gram - positive ( gp ) identification card on the vitek 2 system ( biomerieux ; gp colorimetric identification card ) .
susceptibility testing was performed on confirmed enterococcal isolates using vancomycin ( 0.016 to 256 µg / ml ) and teicoplanin ( 0.016 to 256 µg / ml ) e test strips .
the determination of the mics and the interpretation of vancomycin resistance ( mic ≥ 32 µg / ml ) were done according to the clinical and laboratory standards institute ( clsi ) guidelines . for the interpretation of the teicoplanin results , the intermediate and resistant mics were combined , as previously published , for the assignment of isolates as having vana ( mic ≥ 16 µg / ml ) or vanb ( mic < 16 µg / ml ) .
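a sketch of this mic - based interpretation ( breakpoints as stated above ; purely illustrative , not a clinical tool ) :

def classify_isolate(vanc_mic, teic_mic):
    # vancomycin mic >= 32 ug/ml -> resistant (vre), per the text above
    if vanc_mic < 32:
        return "not vancomycin-resistant by this criterion"
    # among resistant isolates: teicoplanin mic >= 16 -> vanA, else vanB
    return "vanA" if teic_mic >= 16 else "vanB"

print(classify_isolate(vanc_mic=256, teic_mic=64))  # -> vanA
print(classify_isolate(vanc_mic=64, teic_mic=2))    # -> vanB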
patients who had vre colonization or infection ( 30 cases ) were matched 1 : 2 to randomly selected controls who were patients in the same ward or unit during the study period .
controls were selected in such a way that the distributions of case patients and control patients were similar over the dates of hospitalization .
the controls were selected from the population of patients whose surveillance or clinical culture findings were negative for vre . during the study period , there were 2200 surveillance cultures obtained , and only 30 ( 1.4% ) distinct cultures were positive .
we encoded all data into a database and used stata ( version 7 ) for analysis .
we compared the characteristics of cases and controls using the chi - square test for categorical variables and the t - test for continuous variables . to investigate
which potential risk factors were associated with vre , we performed unconditional logistic regression with adjustment for age , sex , and ward .
odds ratios ( ors ) and corresponding 95% confidence intervals ( cis ) were used as summary statistics to assess risk .
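as a sketch , an unadjusted or with a wald 95% ci can be computed from a 2 × 2 table as follows ; the counts are back - calculated approximations from the percentages reported below , for illustration only :

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: exposed / unexposed cases; c, d: exposed / unexposed controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # woolf standard error of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# e.g. prior antibiotic use: ~69% of 30 cases vs ~20% of 60 controls
print(odds_ratio_ci(a=21, b=9, c=12, d=48))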
between february 2006 and march 2010 , a total of 30 patients in icu were identified with vre colonization or infection .
table 1 shows the characteristics of cases and controls . as a result of the matching process ,
the distributions of cases and controls were similar in terms of age and sex .
the case patients were more likely to have multiorgan failure upon icu admission ( 33% versus 12% , p = 0.03 ) , to have underlying chronic renal failure ( 43% versus 15% , p < 0.01 ) , to be receiving hemodialysis ( 37% versus 18% , p = 0.05 ) , or to have undergone a gi contrast procedure ( 17% versus 2% , p = 0.03 ) .
case patients were more likely to have received antimicrobial agents in the 3 months before the study period ( 69% versus 20% , p < 0.01 ) especially vancomycin , metronidazole , quinolones , and piperacillin - tazobactam .
being on chemotherapeutic agents was observed in 10.7% of vre - positive versus 1.6% of vre - negative patients ( p = 0.09 ) . interestingly , being located in a high risk room ( roommate of patients colonized or infected with vre ) was found to be protective ( table 2 ) .
multivariate analysis showed that prior antibiotic use was an independent determinant for the acquisition of vre ( p = 0.026 ) .
vancomycin - resistant enterococcus is becoming the causative agent in an increasing number of health - care - associated infections in the last decade especially in the united states .
nowadays , vre is reaching middle east countries like saudi arabia , and to our knowledge , this is the second published study on the epidemiology and risk factors of vre from saudi arabia .
also , vancomycin - resistant enterococci are becoming more important for hospital infection control , mostly due to their particular features : colonization of the gastrointestinal tract , difficulty in decolonization of patients , and environmental dissemination .
a case - control study was performed comparing all known risk factors for vre colonization from the current literature : host - related factors , including comorbidities such as chronic renal failure , diabetes mellitus , ischemic heart disease , congestive heart failure , liver failure , and liver cirrhosis ; hospital - related factors , including hospital admission in the previous year ( including icu admission ) , length of hospitalization , special procedures during hospitalization , and insertion of devices ; and medication - related factors , such as the use of antimicrobial agents in the past three months [ 5 , 15 , 16 ] .
univariate analysis of our data suggested that the potential risk factors for the new detection of vre were multiorgan failure upon icu admission , underlying chronic renal failure , receipt of hemodialysis , and receipt of antimicrobial agents in the three months before the study period , especially vancomycin , metronidazole , quinolones , and piperacillin - tazobactam .
the association of colonization with renal failure suggests that patients who are more ill are more vulnerable to colonization with vre .
numerous studies of both colonized and infected patients explored the role of preceding antimicrobial treatment as a risk factor for nosocomial vre with conflicting results . it was suggested that previous use of vancomycin , cephalosporins , and antimicrobial agents with an antianaerobic spectrum is important in the development of vre .
our study showed similar data regarding prior exposure to vancomycin , metronidazole , piperacillin - tazobactam , and quinolones as a major risk factor for the development of vre .
it is interesting to note that being located in a high risk icu room ( roommate of patients colonized or infected with vre ) was protective .
however , it was shown previously that roommates of patients identified as colonized or infected with vre were at substantial risk of becoming colonized , with the degree of risk increasing in older and more frail patients .
molecular typing of the isolates from that outbreak revealed that the predominant vre comprised 20 vanb , five vana , and one vana / vanb type isolates , which tended to fall into two genetic clusters that were identifiable phenotypically by their susceptibility to tetracycline . in conclusion ,
the factors associated with acquisition of vre are often complex , may be confounded by local variables , and may be different depending on whether the patient acquires vre by nosocomial transmission or by primary in vivo emergence ( e.g. , gene transfer to previously susceptible enterococci ) .
our study suggests that strict infection control and isolation procedures are effective in controlling health - care - associated transmission of vre , as it was shown that being in a high risk room ( room of previous patients colonized or infected with vre ) was protective .
this observation is likely related to more vigilant postdischarge cleaning and disinfection of these rooms .
one limitation of the study is the small sample size , but vre is not common in saudi arabia . | background .
vancomycin - resistant enterococci ( vre ) are significant nosocomial pathogens worldwide .
there is one report about the epidemiology of vre in saudi arabia . objective . to determine the risk factors associated with vre infection or colonization in intensive care unit ( icu ) settings . design .
this is a descriptive , epidemiologic hospital - based case - control study of patients with vre from february 2006 to march 2010 in icu in a tertiary hospital in saudi arabia
. methods .
data were collected from hospital records of patients with vre .
the main outcome measure was the adjusted odds ratio estimates of potential risk factors for vre . results .
factors associated with vre included icu admission for multiorgan failure , chronic renal failure , prior use of antimicrobial agents in the past three months and before icu admission , gastrointestinal oral contrast procedure , and hemodialysis . being located in a high risk room ( roommate of patients colonized or infected with vre ) was found to be protective .
conclusions . factors associated with vre acquisition are often complex and may be confounded by local variables . |
Mathematicians are a step closer to answering what, for some, is one of life’s most pressing questions – how to make the perfect cup of coffee.
Advanced mathematical analysis of a “hideously complicated” set of variables reveals that the size of the coffee grain is critical, followed by a long list of other factors.
This information is expected to be of particular interest to industrial manufacturers of coffee machines.
The research was carried out by a group from Mathematics Applications Consortium for Science and Industry (MACSI) at University of Limerick (UL), Ireland’s largest industrial mathematics research group. It was published in SIAM Journal on Applied Mathematics on Tuesday.
The research was led by Kevin Moroney of UL and co-authored by Dr William Lee, who now leads the industrial mathematics group at the University of Portsmouth, as well as Professor Stephen O'Brien, director of MACSI at UL, and Johan Marra and Dr Freek Suijver of Philips Research, Eindhoven.
“There are about 2,000 chemicals in coffee, making it as complex as wine,” said Dr Lee.
Coffee brewing is, the researchers say, poorly understood, and a better understanding of the physics and chemistry of coffee brewing is likely to lead to better designed coffee machines.
They used a combination of experimental and mathematical methods to reveal grain size is one of the most important elements in brewing coffee, but a host of other factors also play an important role.
According to Dr Lee, “what makes the best coffee is hideously complicated – from the shape of the filter, to the scale of a single grain, to the flow rate of water and which machine or tool is used, there are an enormous number of variables”.
“But maths is a way of revealing hidden simplicity. By using mathematical analysis, we can begin to tell the story of which elements and in what order lead to the best coffee – we are now one step closer to the perfect cup of coffee,” he explained.
The team hope to develop a complete theory of coffee brewing that could be used to inform the design of filter coffee machines in the same way that industry uses the theories of fluid and solid mechanics to design aeroplanes and racing cars.
“One of the many challenges that have to be overcome to develop such a theory is to understand the effect the grind size has on the extraction of coffee,” Dr Lee said.
“Our model shows that this can be understood in terms of the grind size controlling the balance between rapid extraction of coffee from the surface of grains and slow extraction from the interior of coffee grains.
“This not only explains qualitatively why grind size plays such an important role in determining the taste of coffee but also quantifies that relationship through formulas. These formulas could allow fine tuning the design of a coffee machine for a particular grind size,” Dr Lee concluded.
This work was funded through the Science Foundation Ireland Investigator Programme. |||||
Mathematicians are a step closer to understanding what makes a perfect cup of coffee.
Through some complex calculations, they have shone a light on the processes governing how coffee is extracted from grains in a filter machine.
This could help drinkers optimise their cuppa by applying a more precise - and scientific - approach.
The work is published in the SIAM Journal on Applied Mathematics.
Composed of over 1,800 chemical components, coffee is one of the most widely consumed drinks in the world.
Estimates put the number of cups drunk around the world at more than a couple of billion each day.
Brewing the perfect cup of coffee is always going to be a subjective endeavour. But the work by Kevin Moroney at the University of Limerick, William Lee at the University of Portsmouth and others offers a better understanding of the parameters that influence the final product.
While past studies have looked at the maths of coffee extraction, there hasn't been that much work on drip filter machines.
[Video caption: The BBC's Bryony Hopkins asked Londoners for their idea of the perfect cup of coffee.]
These make up more than half of the 18 million coffee machines sold yearly in Europe, and involve pouring hot water over a bed of coffee grounds housed in a filter.
Gravity pulls the water through the filter, extracting soluble compounds from the coffee grains during the flow.
"Our overall idea is to have a complete mathematical model of coffee brewing that you could use to design coffee machines, rather like we use a theory of fluid and solid mechanics to design racing cars." Dr Lee told BBC News.
He said this study was a step towards that goal, adding: "We looked at the effect of coffee grain size on the way that coffee comes out of a filter coffee machine.
"The really surprising thing to us is that there are really two processes by which coffee is extracted from grains. There's a very quick process by which coffee's extracted from the surface of the grains. And then there's a slower tail-off where coffee comes out of the interior of the grains."
It had previously been known that grinding beans too finely could result in coffee that is over-extracted and very bitter. On the other hand not grinding them enough can make the end result too watery.
"What our work has done is take that [observation] and made it quantitative," said Dr Lee.
"So now, rather than just saying: 'I need to make [the grains] a bit bigger', I can say: 'I want this much coffee coming out of the beans, this is exactly the size [of grain] I should aim for."
This could help coffee drinkers - like Dr Lee - who grind their own beans, to optimise their coffee-making routines.
[Image caption: The analysis looked at drip filter coffee machines]
Dr Lee says he sets his grinder to the largest setting. By doing so, he says: "The grains are a bit larger than you get in the standard grind, which makes the coffee less bitter. Partly because it's adjusting that trade-off between the stuff coming out of the surface and stuff coming out of the interior. When things are larger, you're decreasing the overall surface area of the system.
"Also, the water flows more quickly through a coffee bed of large grains, because the water's spending less time in contact with the coffee, helping reduce the amount of extraction too.
"If it's bitter, it's because you're increasing the amount of surface area in the grains. Also, when the grains are very small, it's hard for the water to slide between them, so the water is spending a lot more time moving through the grains - giving it more time for the coffee to go out of solution."
But what to one person might seem bitter and tarry, to another might seem like the perfect cup of coffee.
"For industrial applications, we'd hope you could optimise the coffee machine for a certain size of grains. You could adjust the flow rate so you get the perfect extraction there," said Dr Lee.
"Or if the coffee machine has an integrated grinder you've got two variables to play around with. You can play around with the grind size and the flow rate."
The researchers are now looking at the shape of the coffee bed in drip filter machines.
"The shape of the coffee bed is deformed as you brew the coffee. When it goes in first, it's sitting flat at the bottom of the filter, but at the end of [brewing] it's coating the walls of the filter. This also seems to play a role in how the coffee tastes," said Dr Lee.
"That would allow us to address another degree of freedom: how exactly you put the water in. Do you put it in as a single jet down the centre, like water pouring out of a tap? Or do you use something more like a shower head, where it's dripping down from lots of places. Those would have different effects in disturbing the coffee bed."
||||| Coffee is one of the world's most popular beverages. In Canada, two thirds of adults drink at least one cup of joe a day.
But beyond the caffeine kick, cups of coffee can differ wildly. Whether you have a Tim Hortons habit, swear by Starbucks or prefer to brew your own beans at home, you never quite know what you're going to get.
That's why researchers at Ireland's University of Limerick are working on brewing the perfect cup of coffee.
What makes a 'perfect' cup of coffee?
Researchers devised a mathematical model specifically for drip coffee machines, hoping to unlock the secret to the smoothest, most predictable, and most efficient brew.
Although it might seem simple, passing hot water over roasted and ground coffee beans is an incredibly complex process.
[Image caption: A model showing the transfers of water and coffee that go into brewing a cup of drip coffee. (Kevin M. Moroney)]
There are over 1,800 chemicals in a typical cup of coffee and many of those contribute to the taste. There are mechanical variables, too: the size of the coffee grounds, the temperature of the hot water, the rate at which the water passes over the grounds, and the density of packing of the grounds all affect the flavour and texture of a humble cup of joe.
How much coffee did the researchers have to drink?
All the authors involved in the study admit that coffee is a part of their daily routine — but their research wasn't as simple as touring the coffee shops of Limerick. Instead, the team considered all the possible variables that go into making coffee and developed a mathematical and computer model that can predict the amount of coffee extraction that will take place under the conditions of a drip coffee machine.
Their initial equations included so many variables that it was far too cumbersome to ever actually use in practice. So the researchers decided to ignore things like brew time and water quality and focused their model on what could truly be measured.
However, if the orders at Starbucks are any indication, what constitutes a good cup of coffee is really based on personal preference. So it wasn't so much about researching the quality of the coffee but more the efficiency of the brew.
What's the biggest variable that changes coffee quality?
According to researcher Kevin Moroney, the size of the grounds is "vitally important" to the extraction of coffee. The larger the grind in drip coffee, the less bitter the taste, partially because there are more gaps between the grinds and the hot water can circulate more easily.
[Image caption: While there are a number of factors that affect coffee quality, grind size is the most important variable. (Irene Coco / Unsplash)]
The bitterness occurs when the surface area of the grain is high (a fine grind), preventing water from easily flowing between the grounds and increasing the amount of coffee extracted from the beans.
What this means is that bitterness is determined primarily by the size of the coffee grounds. The largest setting of grind size will give you the least bitter taste. But it's a trade-off, since smaller grinds pack more of a caffeine punch.
Will this research help us make the perfect cup of coffee?
Yes. In fact, this work has helped them quantify just how many molecules you want to extract from a coffee bean.
"You want to extract about 20 per cent," said Moroney, "because if you extract too little of the mass of the coffee grain, you only get the flavours that come from chemicals of a small molecular mass which extract fast so you don't get that complex flavour you want in the coffee."
There's no way to calculate the perfect 20 per cent extraction at home, but you can bet that coffee machine makers are taking note.
The next step researchers want to take is to determine how specific brewing configurations affect the quality of coffee — and how you can maximize flavour extraction without verging on bitterness. ||||| [Image caption: A lot of math goes into the coffee extraction process.]
Composed of over 1,800 chemical components, coffee is one of the most widely-consumed drinks in the world. The seeds (coffee beans) from the plant of the same name are roasted and ground, allowing a flow of hot water to extract their soluble content. Undissolved solids are filtered from the dissolved particles, and the resulting liquid becomes the concoction that much of the population drinks every day.
Brewers have developed numerous techniques to prepare the popular beverage. All techniques are based on leaching via a solid-liquid extraction, and each method aims to produce the best quality coffee possible – a subjective feat that, despite evaluations from professional coffee tasters, is often a matter of personal preference. Because so many chemical compounds comprise a single batch of coffee, determining precise correlations between the solubles’ physical parameters and the beverage’s quality is difficult. However, understanding the mathematics of extraction can help identify the influence of various parameters on the final product. In a paper publishing today in the SIAM Journal on Applied Mathematics, Kevin M. Moroney, William T. Lee, Stephen B.G. O’Brien, Freek Suijver, and Johan Marra present and analyze a new multiscale model of coffee extraction from a coffee bed.
While past studies have investigated the mathematics of coffee extraction, researchers have previously paid little attention to the drip filter brewing system. Drip filter machines make up about 10 million of the 18+ million coffee machines sold yearly in Europe, and involve pouring hot water over a bed of coffee grounds housed in a filter. Gravity pulls the water through the filter, extracting coffee solubles from the grains during the flow.
[Figure caption: (a) Espresso coffee is made by forcing hot water under high pressure through a compacted bed of finely ground coffee. (b) Drip filter brewing involves pouring hot water over a loose bed of coarser coffee in a filter. In either method water flows through the bed, leaching soluble coffee components from the grains. Any undissolved solids in the fluid are filtered from the extract as the liquid leaves the filter. Image credit: Kevin M. Moroney]
Moroney et al.’s current paper focuses on drip filter machines and expands upon the authors’ previous work, which was published in Chemical Engineering Science in 2015. “Most of the models of coffee extraction we found in the literature either focused on batch extraction in a well-mixed system, or derived general transport equations without proposing specific extraction mechanisms or validating with experiments,” Moroney said. “In comparison, our model describes flow and extraction in a coffee bed, specifies extraction mechanisms in terms of the coffee grain properties, and compares the model’s performance with experiment. Our initial focus on the flow-through cylindrical brewing chamber allowed us to consider the model in one spatial coordinate and ensure that the model assumption of a static bed was valid.”
The authors’ earlier paper presents the derivation of this general model, which considers bed dimensions, flow rates, grind size distribution, and pressure drop. They assume isothermal conditions (constant temperature), because optimal brewing circumstances require a narrow temperature range of 91-94 degrees Celsius. They also assume that coffee bed properties remain homogeneous in any cross section and that water saturates all pores in the coffee bed, eliminating the need to model unsaturated flow. A set of conservation equations on the bed scale monitor the transport of coffee and liquid throughout the coffee bed.
Now the authors take that model one step further. “The model of coffee brewing published in Chemical Engineering Science was mathematically complete, but I would describe it as a model only a computer could love: a complicated system of coupled partial differential equations that can only be solved numerically,” Lee said. “This new paper analyses that model to produce a reduced system of equations for which approximate analytic solutions can be found.”
[Figure caption: Transfers included in the coffee extraction model (reproduced from K. M. Moroney et al, 2015): the diagram shows the transfers of water and coffee described by the coffee extraction model presented in the above reference. Image credit: Kevin M. Moroney]
Because coffee brewing involves so many components, simplifying the model becomes necessary. "In modelling a complicated physical process such as coffee brewing, one attempts to write down a system of equations which captures the essence of the process," O'Brien said. "In doing so, we initially make some simplifications, which neglect some aspects of the real problem. For example, real coffee contains a large number of dissolved substances; we simplify our model by considering the case of a single such substance. The mathematical model then comprises conservation laws (mass, momentum), which in their complete form cannot be solved exactly."
The authors then utilize non-dimensionalization, which measures variables with respect to fundamental constants intrinsic to the problem, to further simplify the extraction model. This technique reduces the number of parameters—which include brew ratio, brewing time, water quality and temperature, grind size and distribution, and extraction uniformity—therefore letting the authors recognize the equations' dominant terms before they begin actively seeking solutions. "Neglecting smaller terms thus allows us to find approximate solutions," O'Brien said.
Recognizing these approximate solutions helps the authors easily identify typical trends. “Approximate solutions are formed based on the dominant processes in the coffee bed during different stages of the extraction process,” Moroney said. “Initially, the concentration of coffee in the bed is determined by the balance between a rapid extraction from the surfaces of coffee grains and the rate at which coffee is removed from the coffee bed by the extracting water. Later in the process, the extraction is dominated by slow diffusion of coffee from the kernels of larger grains, which was initially negligible.” Although the timescales of the aforementioned extraction methods are much shorter in fine coffee grinds rather than coarse grinds, the authors can still construct approximate solutions because of the timescale ratios’ small size. “The value of the solutions lies in the ability to explicitly relate the performance of a brewing system with the properties of the coffee, water and equipment used,” Moroney said. These solutions help predict the coffee quality for specific brewing configurations.
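As a toy illustration of that two-timescale picture (not the authors' actual equations; the rate constants and mass fractions below are invented), one can integrate a fast surface pool and a slow interior pool whose rates scale with grind radius:

import numpy as np

def extraction_yield(radius_mm, t_end=240.0, dt=0.05):
    k_surf = 0.5 / radius_mm         # surface rate ~ specific surface area
    k_bulk = 0.004 / radius_mm**2    # interior rate ~ 1 / diffusion time ~ 1/r^2
    s, b = 0.2, 0.8                  # soluble mass fractions: surface, interior
    for _ in np.arange(0.0, t_end, dt):
        s -= k_surf * s * dt         # fast first-order surface extraction
        b -= k_bulk * b * dt         # slow diffusion-limited interior extraction
    return 1.0 - s - b               # fraction of solubles extracted

for r in (0.25, 0.5, 1.0):           # finer to coarser grinds, in mm
    print(r, round(extraction_yield(r), 3))

Smaller radii deplete both pools within the brew time, which is the quantitative face of the over-extraction and bitterness trade-off described in the articles above.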
[Figure caption: Location of coffee in the bed: the coffee bed consists of (intergranular) pores and grains; the grains consist of (intragranular) pores and solids. The schematic shows the breakdown of coffee in the grains (intragranular pores are not represented for clarity). Image credit: Kevin M. Moroney]
In the end, the authors intend for their model analysis to expose the mathematics involved in coffee brewing. “The research work is ultimately aimed at improving our understanding of the brewing process and understanding the relation between brewing process parameters and perceived coffee taste,” Marra said.
A possible next step involves incorporating the changing coffee bed shape that occurs while water flows through the conical filter holder of a drip filter machine. “This causes both the extraction and the flow rate through the coffee bed to become a function of position,” Marra said. The authors’ research also has the potential to inspire further models on different extraction processes, including unsaturated flow and the trapping of air pockets in a coffee bed, in the never-ending quest for a perfect cup of coffee.
Source article: Moroney, K.M., Lee, W.T., O'Brien, S.B.G., Suijver, F., & Marra, J. (2016). Asymptotic Analysis of the Dominant Mechanisms in the Coffee Extraction Process. SIAM Journal on Applied Mathematics, 76(6), 2196-2217.
About the authors: Kevin M. Moroney is a Ph.D. researcher with the Mathematics Applications Consortium for Science and Industry (MACSI) in the Department of Mathematics and Statistics at the University of Limerick. William T. Lee is a lecturer in the Department of Mathematics and Statistics at the University of Limerick, and is a part of MACSI. Stephen B.G. O’Brien is director of MACSI and a professor of applied mathematics at the University of Limerick. Freek Suijver is a program manager and senior director of the Program Management Team at Philips Research Laboratories. Johan Marra is a principal scientist and chemical engineer at Philips Research Laboratories. | – One plus one equals … brew? Scientists out of Ireland's University of Limerick tapped into math and a computer model in their quest to come up with a cup of coffee that would satisfy even Twin Peaks' Special Agent Dale Cooper, the CBC reports. And while it was impossible to master every factor in the "hideously complicated" set of coffee-brewing variables, the researchers were able to make some inroads in the drip-coffee process in a study published in the SIAM Journal on Applied Mathematics. Making the experiment challenging is that coffee comprises about 2,000 chemical compounds, the subjectivity of individual preferences, and the fact that each step of the brewing process can affect how the beverage turns out—from the water temperature and how fast it flows, to how densely it's packed and how long it's brewed. And so the researchers concentrated on an easily measurable variable: the size of the grind. What they discovered is the larger the grounds, the better the hot water was able to flow in between them, minimizing bitterness. What you'll lose with that bigger grind, though, is the caffeine boost, as smaller grinds offer more of a wake-up wallop. Lead author Kevin Moroney also notes the ideal molecule extraction from a coffee bean is 20%. The researchers note it may be impossible for the average Joe to maximize his joe with this info, but they point out that coffee-machine makers are likely paying close attention to the math. "Our overall idea is to have a complete mathematical model of coffee brewing that you could use to design coffee machines, rather like we use a theory of fluid and solid mechanics to design racing cars," co-author William Lee tells the BBC. (The most potent cup of coffee you can buy.) |
SECTION 1. FORT PRESQUE ISLE NATIONAL HISTORIC SITE, PENNSYLVANIA.
(a) Findings and Purposes.--
(1) Findings.--The Congress finds the following:
(A) Fort Presque Isle was a frontier outpost
located on Garrison Hill in the area of present-day
Erie, Pennsylvania, which was the site of the American
installations built in 1795 and 1796 and in the War of
1812.
(B) General Anthony Wayne was a Revolutionary War
hero who served under General George Washington and, at
one point, was commanding general of the United States
Army. He first arrived in the area of Presque Isle in
1786.
(C) Legend has it that General Wayne was nicknamed
``Mad'' by his troops, not for being rash or foolish,
but for his leadership and bravery on and off the
battlefield.
(D) The original blockhouse of Fort Presque Isle
was built in 1795 by 200 Federal troops from General
Wayne's army, under the direction of Captain John
Grubb. It was the first blockhouse used as part of a
defensive system established to counter Native American
uprisings. It was also used during the War of 1812.
(E) General Wayne was stricken ill at Fort Presque
Isle and died there in 1796. At his request, his body
was buried under the flagpole of the northwest
blockhouse of the fort.
(F) The original blockhouse of Fort Presque Isle
burned in 1852, and the existing structure was built by
the Commonwealth of Pennsylvania in 1880 as a memorial
to General Wayne.
(G) The Pennsylvania Historical and Museum
Commission has recognized the reconstructed blockhouse
as eligible for placement on the National Register of
Historic Places.
(2) Purposes.--The purposes of this section are the
following:
(A) To provide for reconstruction of the frontier
fort at Presque Isle for the benefit, inspiration, and
education of the people of the United States.
(B) To preserve the original grave site of General
``Mad'' Anthony Wayne at Fort Presque Isle.
(C) To broaden understanding of the historical
significance of Fort Presque Isle.
(b) Definitions.--In this section:
(1) Historic site.--The term ``historic site'' means the
Fort Presque Isle National Historic Site established by
subsection (c).
(2) Secretary.--The term ``Secretary'' means the Secretary
of the Interior.
(c) Establishment of Fort Presque Isle National Historic Site.--
(1) Establishment.--There is established as a unit of the
National Park System the Fort Presque Isle National Historic
Site in Erie, Pennsylvania.
(2) Description.--
(A) In general.--The historic site shall consist of
land and improvements comprising the historic location
of Fort Presque Isle, including the existing blockhouse
replica at that location, as depicted on a map entitled
``________'', numbered ________ and dated ________,
comprising approximately ________ acres.
(B) Map and boundary description.--The map referred
to in subparagraph (A) and accompanying boundary
description shall be on file and available for public
inspection in the office of the Director of the
National Park Service and any other office of the
National Park Service that the Secretary determines to
be an appropriate location for filing the map and
boundary description.
(d) Administration of the Historic Site.--
(1) In general.--The Secretary shall administer the
historic site in accordance with this section and the
provisions of law generally applicable to units of the National
Park System, including the Act of August 25, 1916 (commonly
known as the National Park Service Organic Act; 16 U.S.C. 1 et
seq.), and the Act of August 21, 1935 (commonly known as the
Historic Sites, Buildings, and Antiquities Act; 16 U.S.C. 461
et seq.).
(2) Cooperative agreements.--To further the purposes of
this section, the Secretary may enter into a cooperative
agreement with any interested individual, public or private
agency, organization, or institution.
(3) Technical and preservation assistance.--
(A) In general.--The Secretary may provide to any
eligible person described in subparagraph (B) technical
assistance for the preservation of historic structures
of, the maintenance of the cultural landscape of, and
local preservation planning for, the historic site.
(B) Eligible persons.--The eligible persons
described in this subparagraph are--
(i) an owner of real property within the
boundary of the historic site, as described in
subsection (c)(2); and
(ii) any interested individual, agency,
organization, or institution that has entered
into an agreement with the Secretary pursuant
to paragraph (2) of this subsection.
(e) Acquisition of Real Property.--The Secretary may acquire by
donation, exchange, or purchase with funds made available by donation
or appropriation, such lands or interests in lands as may be necessary
to allow for the interpretation, preservation, or restoration of the
historic site.
(f) General Management Plan.--
(1) In general.--Not later than the last day of the third
full fiscal year beginning after the date of enactment of this
Act, the Secretary shall, in consultation with the officials
described in paragraph (2), prepare a general management plan
for the historic site.
(2) Consultation.--In preparing the general management
plan, the Secretary shall consult with an appropriate official
of each appropriate political subdivision of the State of
Pennsylvania that has jurisdiction over all or a portion of
the historic site.
(3) Submission of plan to congress.--Upon the completion of
the general management plan, the Secretary shall submit a copy
of the plan to the Committee on Energy and Natural Resources of
the Senate and the Committee on Resources of the House of
Representatives. | Authorizes the Secretary of the Interior, in administering the site, to acquire by donation, exchange, or purchase any lands or interests necessary to allow for the site's interpretation, preservation, or restoration.
Requires the Secretary to prepare and submit to specified congressional committees a general management plan for the site. |
the need for rapid implementation of high performance , robust , and portable finite element methods has led to approaches based on automated code generation .
this has been proven successful in the context of the fenics @xcite and firedrake @xcite projects . in these frameworks ,
the weak variational form of a problem is expressed in a high level mathematical syntax by means of the domain - specific language ufl @xcite .
this mathematical specification is used by a domain - specific compiler , known as a form compiler , to generate low - level c or c++ code for the integration over a single element of the computational mesh of the variational problem 's left and right hand side operators .
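for instance , a weighted poisson bilinear form ( the running example used later in this section ) can be written in ufl roughly as follows ; this is a sketch using the legacy element - based ufl api , and the element family and degree are illustrative choices :

from ufl import (Coefficient, FiniteElement, TestFunction,
                 TrialFunction, dx, grad, inner, triangle)

element = FiniteElement("Lagrange", triangle, 1)
u = TrialFunction(element)    # unknown
v = TestFunction(element)     # test function
w = Coefficient(element)      # prescribed coefficient

# bilinear form: integral of w * grad(u) . grad(v) over the domain
a = inner(w * grad(u), grad(v)) * dx

the form compiler lowers such a symbolic form to a quadrature kernel that evaluates the local element matrix .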
the code for assembly operators must be carefully optimized : as the complexity of a variational form increases , in terms of number of derivatives , pre - multiplying functions , or polynomial order of the chosen function spaces , the operation count increases , with the result that assembly often accounts for a significant fraction of the overall runtime . as demonstrated by the substantial body of research on the topic , automating the generation of such high performance implementations poses several challenges .
this is a result of the complexity inherent in the mathematical expressions involved in the numerical integration , which varies from problem to problem , and the particular structure of the loop nests enclosing the integrals .
general - purpose compilers , such as those by _
gnu _ and _ intel _ , fail to exploit the structure inherent in the expressions , thus producing sub - optimal code ( i.e. , code which performs more floating - point operations , or `` flops '' , than necessary ; we show this in section [ sec : perf - results ] ) .
research compilers , for instance those based on polyhedral analysis of loop nests , such as pluto @xcite , focus on parallelization and optimization for cache locality , treating issues orthogonal to the question of minimising flops .
the lack of suitable third - party tools has led to the development of a number of domain - specific code transformation ( or synthesizer ) systems .
prior work shows how automated code generation can be leveraged to introduce optimizations that a user should not be expected to write `` by hand '' , while other studies employ mathematical reformulations of finite element integration with the aim of minimizing the operation count . further work analyses the effects and the interplay of generalized code motion and a set of low level optimizations .
it is also worth mentioning two new form compilers , uflacs @xcite and tsfc @xcite , which particularly target the compilation time challenges of the more complex variational forms .
the performance evaluation in section [ sec : perf - results ] includes most of these systems .
however , in spite of such a considerable research effort , there is still no answer to one fundamental question : can we automatically generate an implementation of a form which is optimal in the number of flops executed ? in this paper , we formulate an approach that solves this problem for a particular class of forms and provides very good approximations in all other cases . in particular , we will define `` local optimality '' , which relates the operation count to the structure of the inner loops . in summary ,
our contributions are as follows : * we formalize the class of finite element integration loop nests and we build the space of legal transformations impacting their operation count . * we provide an algorithm to select points in the transformation space .
the algorithm uses a cost model to : ( i ) understand whether a transformation reduces or increases the operation count ; ( ii ) choose between different ( non - composable ) transformations .
* we demonstrate that our approach systematically leads to a local optimum .
we also explain under what conditions of the input problem global optimality is achieved . *
we integrate our approach with a compiler , coffee , which is in use in the firedrake framework . *
we experimentally evaluate our approach using a broader suite of forms , discretizations , and code generation systems than has been used in prior research .
this is essential to demonstrate that our optimality model holds in practice .
in addition , in order to place coffee on the same level as other code generation systems from the viewpoint of low level optimization ( which is essential for a fair performance comparison ) : * we introduce a transformation based on symbolic execution that allows irrelevant floating point operations to be skipped ( for example those involving zero - valued quantities ) . after reviewing basic concepts in finite element integration , in section [ sec : lnopt ]
we introduce a set of definitions mapping mathematical properties to the level of loop nests .
this step is an essential precursor to the definition of the two algorithms , sharing elimination ( section [ sec : sharing - elimination ] ) and pre - evaluation ( section [ sec : pre - evaluation ] ) , through which we construct the space of legal transformations .
the main transformation algorithm in section [ sec : optimal - synthesis ] delivers the local optimality claim by using a cost model to coordinate the application of sharing elimination and pre - evaluation .
we elaborate on the correctness of the methodology in section [ sec : proof ] .
the numerical experiments are presented in section [ sec : perf - results ] .
we conclude by discussing the limitations of the algorithms presented and future work .
we review finite element integration using notation and examples that are standard in the literature .
consider the weak formulation of a linear variational problem : @xmath1 where @xmath2 and @xmath3 are , respectively , a bilinear and a linear form .
the set of _ trial _ functions @xmath4 and the set of _ test _ functions @xmath5 are suitable discrete function spaces . for simplicity , we assume @xmath6 .
let @xmath7 be the set of basis functions spanning @xmath4 .
the unknown solution @xmath8 can be approximated as a linear combination of the basis functions @xmath7 . from the solution of the following linear system it is possible to determine a set of coefficients to express @xmath8 : @xmath9 in which @xmath10 and @xmath11 discretize @xmath2 and @xmath3 respectively : @xmath12 the matrix @xmath10 and the vector @xmath11 are assembled and subsequently used to solve the linear system through ( typically ) an iterative method .
we focus on the assembly phase , which is often characterized as a two - step procedure : _ local _ and _ global _ assembly .
local assembly is the subject of this article .
it consists of computing the contributions of a single element in the discretized domain to the approximated solution of the equation . during global assembly , these local contributions are coupled by suitably inserting them into @xmath10 and @xmath11 .
we illustrate local assembly in a concrete example , the evaluation of the local element matrix for a laplacian operator .
consider the weighted poisson equation : @xmath13 in which @xmath8 is unknown , while @xmath14 is prescribed .
the bilinear form associated with the weak variational form of the equation is : @xmath15 the domain @xmath16 of the equation is partitioned into a set of cells ( elements ) @xmath17 such that @xmath18 and @xmath19 . by defining @xmath20 as the set of basis functions with support on the element @xmath21 ( i.e. those which do not vanish on this element ) , we can express the local element matrix as @xmath22 the local element vector @xmath3 can be determined in an analogous way .
it has been shown ( for example in earlier work ) that local element tensors can be expressed as a sum of integrals over @xmath21 , each integral being the product of derivatives of functions from sets of discrete spaces and , possibly , functions of some spatially varying coefficients . an integral of this form is called a _ monomial _ .
quadrature schemes are typically used to numerically evaluate @xmath23 . for convenience ,
a reference element @xmath24 and an affine mapping @xmath25 to any element @xmath26 are introduced .
this implies that a change of variables from reference coordinates @xmath27 to real coordinates @xmath28 is necessary any time a new element is evaluated .
the basis functions @xmath29 are then replaced with local basis functions @xmath30 such that @xmath31 .
the numerical integration of over an element @xmath21 can then be expressed as follows : @xmath32 where @xmath33 is the number of integration points , @xmath34 the quadrature weight at the integration point @xmath35 , @xmath36 the dimension of @xmath16 , @xmath37 the number of degrees of freedom associated to the local basis functions , and @xmath38 the determinant of the jacobian of the aforementioned change of coordinates . by exploiting the linearity , associativity and distributivity of the relevant mathematical operators , we can rewrite as @xmath39 a generalization of this transformation was introduced in @xcite . since it only involves reference element terms ,
the quadrature sum can be pre - evaluated and reused for each element .
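to make the loop structure concrete , the following is a minimal , purely illustrative numpy sketch of quadrature - mode local assembly for a mass - matrix - like monomial ; all array names are hypothetical , and the weights , geometry factor , and tabulated basis functions are assumed to be given :

```python
import numpy as np

def local_assembly_quadrature(w, detJ, phi):
    # w    : (Q,)   quadrature weights
    # detJ : float  determinant of the jacobian of the change of coordinates
    # phi  : (Q, N) basis functions tabulated at the quadrature points
    Q, N = phi.shape
    A = np.zeros((N, N))
    for q in range(Q):          # reduction loop over integration points
        for i in range(N):      # test functions
            for j in range(N):  # trial functions
                A[i, j] += w[q] * detJ * phi[q, i] * phi[q, j]
    return A

A = local_assembly_quadrature(np.full(4, 0.25), 2.0, np.random.rand(4, 3))
```

in tensor mode , the element - independent reduction over ` q ` would instead be computed once , outside the loop over mesh elements , and each local tensor would reduce to a contraction of that pre - evaluated reference tensor with the cell - dependent geometry tensor .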
the evaluation of the local tensor can then be abstracted as @xmath40 in which the pre - evaluated _ reference tensor _ , @xmath41 , and the cell - dependent _ geometry tensor _ , @xmath42 ,
are exposed . depending on form and discretization
, the relative performance of the two modes , in terms of the operation count , can vary quite dramatically .
the presence of derivatives or coefficient functions in the input form increases the rank of the geometry tensor , making the traditional quadrature mode preferable for sufficiently complex forms . on the other hand , speed - ups from adopting tensor mode
can be significant in a wide class of forms in which the geometry tensor remains sufficiently small .
the discretization , particularly the polynomial order of trial , test , and coefficient functions , also plays a key role in the resulting operation count .
these two modes are implemented in the fenics form compiler @xcite . in this compiler ,
a heuristic is used to choose the most suitable mode for a given form .
it consists of analysing each monomial in the form , counting the number of derivatives and coefficient functions , and checking if this number is greater than a constant found empirically @xcite .
we will return to the efficacy of this approach in section [ sec : perf - results ] .
one of the objectives of this paper is to produce a system that goes beyond the dichotomy between quadrature and tensor modes .
we will reason in terms of loop nests , code motion , and code pre - evaluation , searching the entire implementation space for an optimal synthesis .
in this section , we characterize global and local optimality for finite element integration , as well as the space of legal transformations that needs to be explored to achieve them .
the method by which exploration is performed is discussed in section [ sec : optimal - synthesis ] . in order to make the article self - contained , we start by reviewing basic compiler terminology .
a perfect loop nest is a loop whose body either 1 ) comprises only a sequence of non - loop statements or 2 ) is itself a perfect loop nest .
if this condition does not hold , a loop nest is said to be imperfect .
an independent basic block is a sequence of statements such that no data dependencies exist between statements in the block .
we focus on perfect nests whose innermost loop body is an independent basic block .
a straightforward property of this class is that hoisting invariant expressions from the innermost to any of the outer loops or the preheader ( i.e. , the block that precedes the entry point of the nest ) is always safe , as long as any dependencies on loop indices are honored .
we will make use of this property .
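as a minimal illustration of this property ( with hypothetical arrays and bounds ) , an invariant sub - expression can be hoisted from the innermost loop to the enclosing one :

```python
N, M = 4, 4
alpha = 2.0
g = [float(i) for i in range(N)]
c = [float(j) for j in range(M)]
a = [[0.0] * M for _ in range(N)]

# before : ( alpha * g[i] ) does not depend on j , yet it is
# recomputed M times by the innermost loop
for i in range(N):
    for j in range(M):
        a[i][j] = alpha * g[i] * c[j]

# after hoisting : safe because the innermost body is an independent
# basic block and the dependence on i is honored
for i in range(N):
    t = alpha * g[i]
    for j in range(M):
        a[i][j] = t * c[j]
```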
the results of this section could also be generalized to larger classes of loop nests , in which basic block independence does not hold , although this would require refinements beyond the scope of this paper . by mapping mathematical properties to the loop nest level , we introduce the concepts of a _ linear loop _ and , more generally , a ( perfect ) multilinear loop nest .
[ def : linear - loop ] a loop @xmath3 defining the iteration space @xmath43 through the iteration variable @xmath44 , or simply @xmath45 , is linear if in its body :
1 . @xmath44 appears only as an array index , and
2 . whenever an array @xmath2 is indexed by @xmath44 ( @xmath46 $ ] ) , all expressions in which this appears are affine in @xmath46 $ ] .
[ def : multi - linear - loop ] a multilinear loop nest of arity @xmath37 is a perfect nest composed of @xmath37 loops , in which all of the expressions appearing in the body of the innermost loop are affine in each loop @xmath45 separately .
we will show that multilinear loop nests , which arise naturally when translating bilinear or linear forms into code , are important because they have a structure that we can take advantage of to reach a local optimum .
we define two other classes of loops .
[ def : i - loop ] a loop @xmath45 is said to be a reduction loop if in its body :
1 . @xmath44 appears only as an array index , and
2 . for each augmented assignment statement @xmath47 ( e.g. , an increment ) , arrays indexed by @xmath44 appear only on the right hand side of @xmath47 .
[ def : e - loop ] a loop @xmath45 is said to be an order - free loop if its iterations can be executed in any arbitrary order .
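the three loop classes can be illustrated with a small , self - contained sketch ( all names hypothetical ) :

```python
M, Q, E = 3, 3, 3
b = [1.0, 2.0, 3.0]; c = [0.5, 0.5, 0.5]; out = [0.0] * M
w = [0.2, 0.6, 0.2]; f = [1.0, 4.0, 9.0]; acc = 0.0
k = [0.0] * E

# linear loop : j appears only as an array index , and every
# expression containing b[j] is affine in b[j]
for j in range(M):
    out[j] = 2.0 * b[j] + c[j]

# reduction loop : arrays indexed by q appear only on the right
# hand side of the augmented assignment
for q in range(Q):
    acc += w[q] * f[q]

# order-free loop : iterations are independent , so they may be
# executed in any order ( e.g. , the loop over mesh elements )
for e in range(E):
    k[e] = 2.0 * e
```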
consider equation [ eq : quadrature ] and the ( abstract ) loop nest implementing it illustrated in figure [ code : loopnest ] . the imperfect nest @xmath48 $ ] comprises an order - free loop @xmath49 ( over elements in the mesh ) , a reduction loop @xmath45 ( performing numerical integration ) , and a multilinear loop nest @xmath50 $ ] ( over test and trial functions ) . in the body of @xmath51 , one or more statements evaluate the local tensor for the element @xmath52 .
expressions ( the right hand side of a statement ) result from the translation of a form in high level matrix notation into code .
in particular , @xmath53 is the number of monomials ( a form is a sum of monomials ) , @xmath54 ( @xmath55 ) represents the product of a coefficient function ( e.g. , the inverse jacobian matrix for the change of coordinates ) with test or trial functions , and @xmath56 is a function of coefficients and geometry .
we do not pose any restrictions on function spaces ( e.g. , scalar- or vector - valued ) , coefficient expressions ( linear or non - linear ) , differential and vector operators , so @xmath56 can be arbitrarily complex .
we say that such an expression is in _ normal form _ , because the algebraic structure of a variational form is intact : products have not yet been expanded , distinct monomials can still be identified , and so on .
this brings us to formalize the class of loop nests that we aim to optimize .
[ def : fem - loopnest ] a finite element integration loop nest is a loop nest in which the following appear , in order : an imperfect order - free loop , an imperfect ( perfect only in some special cases ) , linear or non - linear reduction loop , and a multilinear loop nest whose body is an independent basic block in which expressions are in normal form .
we then characterize optimality for a finite element integration loop nest as follows .
[ def : mln - optimality ] let @xmath57 be a generic loop nest , and let @xmath58 be a transformation function @xmath59 such that @xmath60 is semantically equivalent to @xmath57 ( possibly , @xmath61 ) .
we say that @xmath62 is an optimal synthesis of @xmath57 if the total number of operations ( additions , products ) performed to evaluate the result is minimal . the concept of local optimality , which relies on the particular class of _ flop - decreasing _ transformations , is also introduced : a transformation which reduces the operation count is called flop - decreasing .
[ def : mln - quasi - optimality ] given @xmath57 , @xmath60 and @xmath58 as in definition [ def : mln - optimality ] , we say that @xmath62 is a locally optimal synthesis of @xmath57 if :
* the number of operations ( additions , products ) in the innermost loops performed to evaluate the result is minimal , and
* @xmath58 is expressed as a composition of flop - decreasing transformations .
the restriction to flop - decreasing transformations aims to exclude those apparent optimizations that , to achieve flop - optimal innermost loops , would rearrange the computation at the level of the outer loops causing , in fact , a global increase in operation count .
we also observe that definitions [ def : mln - optimality ] and [ def : mln - quasi - optimality ] do not take into account memory requirements .
if the execution of the loop nest were memory - bound ( the ratio of operations to bytes transferred from memory to the cpu being too low ) , then optimizing the number of flops would be fruitless .
henceforth we assume we operate in a cpu - bound regime , evaluating arithmetic - intensive expressions . in the context of finite elements , this is often true for more complex multilinear forms and/or higher order elements .
achieving optimality in polynomial time is not generally feasible , since the @xmath56 sub - expressions can be arbitrarily unstructured .
however , multilinearity results in a certain degree of regularity in @xmath54 and @xmath55 . in the following sections , we will elaborate on these observations and formulate an approach that achieves ( i ) at least a local optimum in all cases , and ( ii ) global optimality whenever the monomials are `` sufficiently structured '' . to this end , we will construct :
* the space of legal transformations impacting the operation count ( sections [ sec : sharing - elimination ] [ sec : mem - const ] ) ;
* an algorithm to select points in the transformation space ( section [ sec : optimal - synthesis ] ) .
we begin by introducing the fundamental notion of sharing .
a statement within a loop nest @xmath57 presents sharing if at least one of the following conditions holds :
spatial sharing : : there are at least two symbolically identical sub - expressions .
temporal sharing : : there is at least one non - trivial sub - expression ( e.g. , an addition or a product ) that is redundantly executed because it is independent of @xmath63 .
to illustrate the definition , we show in figure [ code : multi_loopnest ] how sharing evolves as factorization and code motion are applied to a trivial multilinear loop nest . in the original loop nest ( figure [ code : multi_loopnest_a ] ) , spatial sharing is induced by the symbol @xmath64 .
factorization eliminates spatial sharing and creates temporal sharing ( figure [ code : multi_loopnest_b ] ) . finally , generalized code motion @xcite , which hoists sub - expressions that are redundantly executed by at least one loop in the nest , leads to optimality ( figure [ code : multi_loopnest_c ] ) .
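the figures themselves are not reproduced here ; the following sketch , with hypothetical arrays , mirrors the same three stages :

```python
N, M = 4, 4
b = [1.0] * N; c = [2.0] * M; d = [3.0] * M
a = [[0.0] * M for _ in range(N)]

# (a) spatial sharing : the symbol b[i] occurs twice in the expression
for i in range(N):
    for j in range(M):
        a[i][j] = b[i] * c[j] + b[i] * d[j]

# (b) factorization : spatial sharing removed , but the sum
# ( c[j] + d[j] ) is now temporal sharing , recomputed for every i
for i in range(N):
    for j in range(M):
        a[i][j] = b[i] * (c[j] + d[j])

# (c) generalized code motion : the j - dependent sum is hoisted
# out of the i loop , reaching the optimal operation count
t = [c[j] + d[j] for j in range(M)]
for i in range(N):
    for j in range(M):
        a[i][j] = b[i] * t[j]
```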
in this section , we study _ sharing elimination _ , a transformation that aims to reduce the operation count by removing sharing through the application of expansion , factorization , and generalized code motion .
if the objective were reaching optimality and the expressions lacked structure , a transformation of this sort would require solving a large combinatorial problem ( for instance , to evaluate the impact of all possible factorizations ) .
our sharing elimination strategy , instead , exploits the structure inherent in finite element integration expressions to guarantee , after coordination with other transformations ( an aspect which we discuss in the following sections ) , local optimality .
global optimality is achieved if stronger preconditions hold . setting local optimality , rather than optimality , as the primary goal is essential to produce simple and computationally efficient algorithms , two necessary conditions for integration with a compiler .
finite element expressions can be seen as compositions of operations between tensors . often , the optimal implementation strategy for these operations must be determined from two alternatives .
for instance , consider @xmath65 , with @xmath66 being the transposed inverse jacobian matrix for the change of ( two - dimensional ) coordinates , and @xmath67 a generic two - dimensional vector .
the tensor operation will reduce to the scalar expression @xmath68 , in which @xmath69 and @xmath70 represent components of @xmath67 that depend on @xmath45 . to minimize the operation count for expressions of this kind , we have two options :
[ strategy : i ] eliminating temporal sharing through generalized code motion .
[ strategy : ii ] eliminating spatial sharing first , through product expansion and factorization , and temporal sharing afterwards , again through generalized code motion .
in the current example , we observe that , depending on the size of @xmath45 , applying strategy [ strategy : ii ] could reduce the operation count , since the expression would be recast as @xmath71 and some hoistable sub - expressions would be exposed . on the other hand , strategy [ strategy : i ] would have no effect as @xmath67 only depends on a single loop , @xmath45 .
in general , the choice between the two strategies depends on multiple factors : the loop sizes , the increase in operation count due to expansion ( in strategy [ strategy : ii ] ) , and the gain due to code motion .
a second application of strategy [ strategy : ii ] was provided in figure [ code : multi_loopnest ] .
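since the exact expression is not reproducible from the text above , the following sketch uses a hypothetical two - dimensional analogue to contrast the two strategies :

```python
N = 8
k00, k01, k10, k11 = 1.0, 2.0, 3.0, 4.0   # loop-invariant entries
x = [float(i) for i in range(N)]; y = [float(i) for i in range(N)]
t = [0.0] * N

# strategy I alone : every summand depends on i , so nothing is
# hoistable and 4 products per iteration remain
for i in range(N):
    t[i] = k00 * x[i] + k10 * y[i] + k01 * x[i] + k11 * y[i]

# strategy II : factorizing the i - dependent symbols exposes the
# invariant sums , which code motion hoists ; 2 products remain
s0 = k00 + k01
s1 = k10 + k11
for i in range(N):
    t[i] = s0 * x[i] + s1 * y[i]
```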
these examples motivate the introduction of a particular class of expressions , for which the two strategies assume notable importance .
[ def : struct - expr ] we say that an expression is `` structured along a loop nest @xmath57 '' if and only if , for every symbol @xmath72 depending on at least one loop in @xmath57 , the spatial sharing of @xmath72 may be eliminated by factorizing all occurrences of @xmath72 in the expression .
[ prop : multi - struct ] an expression along a multilinear loop nest is structured .
this follows directly from definition [ def : linear - loop ] and definition [ def : multi - linear - loop ] , which essentially restrict the number of occurrences of a symbol @xmath72 in a summand to at most 1 .
if @xmath57 were an arbitrary loop nest , a given symbol @xmath72 could appear everywhere ( e.g. , @xmath37 times in a summand and @xmath53 times in another summand with @xmath73 , as argument of a higher level function , in the denominator of a division ) , thus posing the challenge of finding the factorization that maximizes temporal sharing . if @xmath57 is instead a finite element integration loop nest , thanks to proposition [ prop : multi - struct ] the space of flop - decreasing transformations is constructed by `` composition '' of strategy [ strategy : i ] and strategy [ strategy : ii ] , as illustrated in algorithm [ algo : sharing - elimination ] .
finally , we observe that the @xmath56 sub - expressions can sometimes be considered `` weakly structured '' .
this happens when a relaxed version of definition [ def : struct - expr ] applies , in which the factorization of @xmath72 only `` minimizes '' ( rather than `` eliminates '' ) spatial sharing ( for instance , in the complex hyperelastic model analyzed in section [ sec : perf - results ] )
. weak structure will be exploited by algorithm [ algo : sharing - elimination ] in the attempt to achieve optimality .
algorithm [ algo : sharing - elimination ] describes sharing elimination , assuming as input a tree representation of the loop nest . it makes use of the following notation and terminology :
* _ multilinear operand _ : any @xmath54 or @xmath55 in the input expression .
* _ multilinear symbol _ : a symbol appearing within a multilinear operand and depending on @xmath74 or @xmath51 ( e.g. , test functions , first order derivatives of test functions , etc . ) .
examples will be provided in section [ sec : se - examples ] .
' '' '' [ algo : sharing - elimination ] the input of the algorithm is a tree representation of a finite element integration loop nest .
1 . perform a depth - first visit of the loop tree to collect and partition multilinear operands into disjoint sets , @xmath75 .
@xmath76 is such that all multilinear operands in each @xmath77 share the same set of multilinear symbols @xmath78 , whereas there is no sharing across different partitions . for all multilinear operands in @xmath77 such that @xmath79 , apply strategy [ strategy : i ] .
+ _ note : as a consequence of proposition [ prop : multi - struct ] , @xmath80 and @xmath81 represent the number of products in the innermost loop induced by @xmath82 if strategy [ strategy : i ] or strategy [ strategy : ii ] were applied . _
2 . for each sub - expression @xmath83 depending on exactly one linear loop ,
collect the multilinear symbols and the temporaries produced at step ( 1 ) .
partition them into disjoint sets , @xmath84 , such that @xmath85 includes all instances of a given symbol in @xmath83 .
apply strategy [ strategy : ii ] , factorizing the symbols in each @xmath85 , provided that this leads to a reduction in operation count ; otherwise , apply strategy [ strategy : i ] .
+ _ note : the last check ensures the flop - decreasing nature of the transformation . in the cases in which expansion outweighs code motion , strategy [ strategy : i ] is preferred . _
+ _ note : the expansion cost is a function of the products wrapping a symbol ( how many of them and their arity ) , so it can be determined through tree visits . _
3 . build the _ sharing graph _ @xmath86 .
each @xmath87 represents a multilinear symbol or a temporary produced by the previous steps .
an edge @xmath88 , @xmath89 indicates that a product @xmath90 would appear if the sub - expressions including @xmath91 and @xmath92 were expanded .
+ _ note : the following steps will only impact bilinear forms , since otherwise @xmath93 . _ 4 .
partition @xmath47 into disjoint sets , @xmath94 , such that @xmath95 includes all instances of a given symbol @xmath96 in the expression . transform @xmath97 by merging @xmath98 into a unique vertex @xmath96 ( taking the union of the edges ) , provided that factorizing @xmath99 $ ] would not cause an increase in operation count .
5 . map @xmath97 to an integer linear programming ( ilp ) model for determining how to optimally apply strategy [ strategy : ii ] .
the solution is the set of symbols that will be factorized by strategy [ strategy : ii ] .
let @xmath100 ; the ilp model then is as follows : @xmath101 6 .
perform a depth - first visit of the loop tree and , for each yet unhandled or hoisted expression , apply the more profitable of strategy [ strategy : i ] and strategy [ strategy : ii ] .
+ _ note : this pass speculatively assumes that expressions are ( weakly ) structured along the reduction loop . if the assumption does not hold , the operation count will generally be sub - optimal because only a subset of factorizations and code motion opportunities may eventually be considered . _ ' '' '' although the primary goal of algorithm [ algo : sharing - elimination ] is operation count minimization within the multilinear loop nest , the enforcement of flop - decreasing transformations ( steps ( 2 ) and ( 4 ) ) and the re - scheduling of sub - expressions within outer loops ( last step ) also attempt to optimize the loop nest globally .
we will further elaborate this aspect in section [ sec : proof ] .
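the ilp model itself is not recoverable from the extraction above ; purely to illustrate the decision made at the ilp step , the following brute - force sketch assigns each product ( edge of the sharing graph ) to one of its two endpoints and keeps the assignment that , under a deliberately simplified cost model , minimizes the number of multiplications in the innermost loop :

```python
from itertools import product

# hypothetical sharing graph : an edge (u, v) is a product u*v that
# would appear if the enclosing sub - expressions were expanded
edges = [("b", "c"), ("b", "d"), ("e", "c"), ("e", "d")]

def cost(assignment):
    # simplified model : each symbol chosen as a factor contributes one
    # multiplication per innermost iteration ; the grouped terms are
    # combined with additions , whose count is ignored here
    return len(set(assignment))

best = min(product(*edges), key=cost)  # one endpoint per edge
print(best)  # ('b', 'b', 'e', 'e') : factorize the symbols b and e
```

in a real compiler the equivalent choice is delegated to the ilp solver described above ; the sketch only conveys what is being decided .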
consider again figure [ code : multi_loopnest_a ] .
we have @xmath102 , with @xmath103 , @xmath104 , and @xmath105 . for all @xmath106
, we have @xmath107 , although applying strategy [ strategy : i ] in step ( 1 ) has no effect .
the sharing graph is @xmath108 , and @xmath109 .
the ilp formulation leads to the code in figure [ code : multi_loopnest_c ] . in figure
[ code : poisson ] , algorithm [ algo : sharing - elimination ] is executed in a very simple realistic scenario , which originates from the bilinear form of a poisson equation in two dimensions .
we observe that @xmath110 , with @xmath111 and @xmath112 .
in addition , @xmath113 , so strategy [ strategy : i ] is applied to both partitions ( step ( 1 ) ) .
we then have ( step ( 3 ) ) @xmath114 .
since there are no more factorization opportunities , the ilp formulation becomes irrelevant .
for reasons of space , further examples , including the hyperelastic model evaluated in section [ sec : perf - results ] and other non - trivial ilp instances , are made available online .
sharing elimination uses three operators : expansion , factorization , and code motion . in this section ,
we discuss the role and legality of a fourth operator : reduction pre - evaluation .
we will see that what makes this operator special is the fact that there exists a single point in the transformation space of a monomial ( i.e. , a specific factorization of test , trial , and coefficient functions ) ensuring its correctness .
we start with an example .
consider again the loop nest and the expression in figure [ code : loopnest ] .
we pose the following question : are we able to identify sub - expressions for which the reduction induced by @xmath45 can be pre - evaluated , thus obtaining a decrease in operation count proportional to the size of @xmath45 , @xmath43 ?
the transformation we look for is exemplified in figure [ code : loopnest_rednored ] with a simple loop nest .
the reader may verify that a similar transformation is applicable to the example in figure [ code : poisson_a ] .
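as the figure is not reproduced here , the following self - contained sketch conveys the idea on a mass - matrix - like monomial : the reduction over the integration points is evaluated once , since it is independent of the mesh element , and the reduction loop disappears from the nest :

```python
import numpy as np

Q, N, E = 6, 3, 10
w = np.full(Q, 1.0 / Q)        # quadrature weights
phi = np.random.rand(Q, N)     # tabulated basis functions
detJ = np.random.rand(E)       # per-element geometry factor

# before : the q reduction is re-executed for every element ,
# although w and phi do not depend on the element
A = np.zeros((E, N, N))
for e in range(E):
    for q in range(Q):
        A[e] += detJ[e] * w[q] * np.outer(phi[q], phi[q])

# after pre-evaluation : the q reduction is computed once ( the
# reference tensor ) and only its scaling remains in the element loop
ref = sum(w[q] * np.outer(phi[q], phi[q]) for q in range(Q))
A2 = np.zeros((E, N, N))
for e in range(E):
    A2[e] = detJ[e] * ref

assert np.allclose(A, A2)
```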
pre - evaluation can be seen as the generalization of tensor contraction ( section [ sec : tc ] ) to a wider class of sub - expressions .
we know that multilinear forms can be seen as sums of monomials , each monomial being an integral over the equation domain of products ( of derivatives ) of functions from discrete spaces . a monomial can always be reduced to the product between a `` reference '' and a `` geometry '' tensor . in our model , a reference tensor is simply represented by one or more sub - expressions independent of @xmath49 , exposed after particular transformations of the expression tree .
this leads to the following algorithm . ' '' ''
[ algo : pre - evaluation ] consider a finite element integration loop nest @xmath115 $ ] .
we dissect the normal form input expression into distinct sub - expressions , each of them representing a monomial .
each sub - expression is then factorized so as to split constants from @xmath116$]-dependent terms .
this transformation is feasible , as a consequence of known results in the literature .
these @xmath116$]-dependent terms are hoisted outside of @xmath57 and stored into temporaries . as part of this process ,
the reduction induced by @xmath45 is computed by means of symbolic execution .
finally , @xmath45 is removed from @xmath57 . ' '' ''
the pre - evaluation of a monomial introduces some critical issues :
1 . depending on the complexity of a monomial , a certain number , @xmath117 , of temporary variables is required if pre - evaluation is performed . such temporary variables are actually @xmath37 - dimensional arrays of size @xmath47 , with @xmath37 and @xmath47 being , respectively , the arity and the extent ( iteration space size ) of the multilinear loop nest ( e.g. , @xmath118 and @xmath119 in the case of bilinear forms ) . for certain values of @xmath120 , pre - evaluation may dramatically increase the working set , which may be counter - productive for actual execution time .
2 . the transformations exposing @xmath116$]-dependent terms increase the arithmetic complexity of the expression ( e.g. , expansion tends to increase the operation count ) . this could outweigh the gain due to pre - evaluation .
3 . a strategy for coordinating sharing elimination and pre - evaluation is needed . we observe that sharing elimination inhibits pre - evaluation , whereas pre - evaluation could expose further sharing elimination opportunities .
we expand on point ( 1 ) in the next section , while we address points ( 2 ) and ( 3 ) in section [ sec : optimal - synthesis ] .
we have just observed that the code motion induced by monomial pre - evaluation may dramatically increase the working set size .
even more aggressive code motion strategies are theoretically conceivable .
imagine @xmath57 is enclosed in a time stepping loop .
one could think of exposing ( through some transformations ) and hoisting time - invariant sub - expressions for minimizing redundant computation at each time step .
the working set size would then increase by a factor @xmath121 , and since @xmath122 , the gain in operation count would probably be outweighed , from a runtime viewpoint , by a much larger memory pressure . since , for certain forms and discretizations , hoisting may cause the working set to exceed the size of some level of local memory ( e.g. the last level of private cache on a conventional cpu , the shared memory on a gpu )
, we introduce the following _ memory constraints_. [ const : le ] the size of a temporary due to code motion must not be proportional to the size of @xmath49 .
[ const : th ] the total amount of memory occupied by the temporaries due to code motion must not exceed a certain threshold , ` t_h ` . constraint [ const : le ] is a policy decision that the compiler should not silently consume memory on global data objects .
it has the effect of shrinking the transformation space .
constraint [ const : th ] has both theoretical and practical implications , which will be carefully analyzed in the next sections .
in this section , we build a transformation algorithm that , given a memory bound , systematically reaches a local optimum for finite element integration loop nests .
we address the following two issues :
1 . _ coordination of pre - evaluation and sharing elimination . _ recall from section [ sec : pre - evaluation ] that pre - evaluation could either increase or decrease the operation count in comparison with that achieved by sharing elimination .
2 . _ optimizing over composite operations . _ consider a form comprising two monomials @xmath123 and @xmath124 . assume that pre - evaluation is profitable for @xmath123 but not for @xmath124 , and that @xmath123 and @xmath124 share at least one term ( for example some basis functions ) . if pre - evaluation were applied to @xmath123 , the sharing between @xmath123 and @xmath124 would be lost . we therefore need a mechanism to understand which transformation ( pre - evaluation or sharing elimination ) results in the highest operation count reduction when considering the whole set of monomials ( i.e. , the expression as a whole ) .
let @xmath125 be a cost function that , given a monomial @xmath126 , returns the gain / loss achieved by pre - evaluation over sharing elimination .
in particular , we define @xmath127 , where @xmath128 and @xmath129 represent the operation counts resulting from applying sharing elimination and pre - evaluation , respectively . thus pre - evaluation is profitable for @xmath53 if and only if @xmath130 .
we return to the issue of deriving @xmath128 and @xmath129 in section [ sec : op_count ] . having defined @xmath131
, we can now describe the transformation algorithm ( algorithm [ algo : gamma ] ) .
' '' '' [ algo : gamma ] the algorithm has three main phases : initialization ( step 1 ) ; determination of the monomials that should be pre - evaluated while preserving the memory constraints ( steps 2 - 4 ) ; application of pre - evaluation and sharing elimination ( step 5 ) .
1 . perform a depth - first visit of the expression tree and determine the set of monomials @xmath132 . let @xmath47 be the subset of monomials @xmath53 such that @xmath133 . the set of monomials that will _ potentially _ be pre - evaluated is @xmath134 .
+ _ note : there are two fundamental reasons for not pre - evaluating @xmath135 straight away : 1 ) the potential presence of spatial sharing between @xmath123 and @xmath136 , which impacts the search for the global optimum ; 2 ) the risk of breaking constraint [ const : th ] . _
2 . build the set @xmath137 of all possible bipartitions of @xmath82 . let @xmath138 be the dictionary that will store the operation counts of the different alternatives .
3 . discard @xmath139 if the memory required after applying pre - evaluation to the monomials in @xmath140 exceeds @xmath141 ( see constraint [ const : th ] ) ; otherwise , add @xmath142 = \mathrm{\theta}^{se}(s \cup b_s ) + \mathrm{\theta}^{pre}(b_p)$ ] .
+ _ note : @xmath143 is in practice very small , since even complex forms usually have only a few monomials . this pass can then be accomplished rapidly as long as the cost of calculating @xmath128 and @xmath129 is negligible . we elaborate on this aspect in section [ sec : op_count ] . _
4 . take @xmath144 $ ] .
5 . apply pre - evaluation to all monomials in @xmath140 , then apply sharing elimination to all resulting expressions .
+ _ note : because of the reuse of basis functions , pre - evaluation may produce some identical tables , which will be mapped to the same temporary variable . sharing elimination is therefore transparently applied to all expressions , including those resulting from pre - evaluation . _
' '' ''
the output of the transformation algorithm is provided in figure [ code : loopnest - opt ] , assuming as input the loop nest in figure [ code : loopnest ] .
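to make steps ( 2 ) - ( 4 ) concrete , the following sketch enumerates the bipartitions of the candidate monomials , discards those violating the memory threshold , and keeps the cheapest ; ` theta_se ` , ` theta_pre ` , and ` memory_footprint ` are stand - ins for the cost and memory estimates described in the text , and ` others ` is assumed to be a set :

```python
from itertools import combinations

def best_bipartition(candidates, others, theta_se, theta_pre,
                     memory_footprint, T_H):
    # enumerate bipartitions (B_s, B_p) of the candidates : B_p is
    # pre-evaluated , B_s is handled by sharing elimination
    best, best_ops = (set(candidates), set()), float("inf")
    for r in range(len(candidates) + 1):
        for B_p in map(set, combinations(candidates, r)):
            B_s = set(candidates) - B_p
            if memory_footprint(B_p) > T_H:   # constraint [const:th]
                continue
            ops = theta_se(others | B_s) + theta_pre(B_p)
            if ops < best_ops:
                best, best_ops = (B_s, B_p), ops
    return best
```

since even complex forms usually have only a few monomials , the exponential enumeration is affordable in practice , as noted above .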
we tie up the remaining loose end : the construction of the cost function @xmath131 .
we recall that @xmath145 , with @xmath128 and @xmath129 representing the operation counts after applying sharing elimination and pre - evaluation .
since @xmath131 is deployed in a working compiler , simplicity and efficiency are essential characteristics . in the following ,
we explain how to derive these two values .
the most trivial way of evaluating @xmath128 and @xmath129 would consist of applying the actual transformations and simply counting the number of operations .
this would be tolerable for @xmath128 , as algorithm [ algo : sharing - elimination ] tends to have negligible cost .
however , the overhead would be unacceptable if we applied pre - evaluation ( in particular , symbolic execution ) to all bipartitions analyzed by algorithm [ algo : gamma ] .
we therefore seek an analytic way of determining @xmath129 .
the first step consists of estimating the _ increase factor _ , @xmath146 .
this number captures the increase in arithmetic complexity due to the transformations exposing pre - evaluation opportunities . for context , consider the example in figure [ code : increase_factor ] .
one can think of this as the ( simplified ) loop nest originating from the integration of the action of a mass matrix .
the sub - expression ` f_0*b_{i0}+f_1*b_{i1}+f_2*b_{i2 } ` represents the coefficient @xmath147 over ( tabulated ) basis functions ( array @xmath137 ) . in order to apply pre - evaluation
, the expression needs to be transformed to separate @xmath147 from all @xmath116$]-dependent quantities ( see algorithm [ algo : pre - evaluation ] ) . by product expansion , we observe an increase in the number of @xmath50$]-dependent terms of a factor @xmath148 . in general , however , determining @xmath146 is not so straightforward , since redundant tabulations may result from common sub - expressions . consider the previous example .
one may add one coefficient in the same function space as @xmath147 , repeat the expansion , and observe that multiple sub - expressions ( e.g. , @xmath149 and @xmath150 ) will reduce to identical tables .
to evaluate @xmath146 , we then use combinatorics .
we calculate the @xmath151-combinations with repetitions of @xmath37 elements , where : ( i ) @xmath151 is the number of ( derivatives of ) coefficients appearing in a product ; ( ii ) @xmath37 is the number of unique basis functions involved in the expansion . in the original example
, we had @xmath152 ( for @xmath153 , @xmath154 , and @xmath155 ) and @xmath156 , which confirms @xmath157 . in the modified example
, there are two coefficients , so @xmath158 , which means @xmath159 .
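since the rule is a closed formula , it can be checked directly ; here ` k ` and ` n ` mirror the two quantities defined above ( number of coefficients , number of unique basis functions ) :

```python
from math import comb

def increase_factor(k, n):
    # k-combinations with repetition of n elements : C(n + k - 1, k)
    return comb(n + k - 1, k)

print(increase_factor(1, 3))  # one coefficient , 3 basis functions -> 3
print(increase_factor(2, 3))  # two coefficients -> 6
```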
if @xmath160 ( the extent of the reduction loop ) , we already know that pre - evaluation will not be profitable . intuitively , this means that we are introducing more operations than we are saving from pre - evaluating @xmath45 .
if @xmath161 , we still need to find the number of terms @xmath162 such that @xmath163 .
the mass matrix monomial in figure [ code : increase_factor ] is characterized by the dot product of test and trial functions , so trivially @xmath164 . in the example in figure [ code : poisson ] , instead , we have @xmath165 after a suitable factorization of basis functions . in general , therefore , @xmath162 depends on both form and discretization .
to determine this parameter , we look at the re - factorized expression ( as established by algorithm [ algo : pre - evaluation ] ) , and simply count the terms amenable to pre - evaluation .
we demonstrate that the orchestration of sharing elimination and pre - evaluation performed by the transformation algorithm guarantees local optimality ( definition [ def : mln - quasi - optimality ] ) .
the proof re - uses concepts and explanations provided throughout the paper , as well as the terminology introduced in section [ sec : se - algo ] .
[ prop : optimal - approach ] consider a multilinear form comprising a set of monomials @xmath132 , and let @xmath57 be the corresponding finite element integration loop nest .
let @xmath58 be the transformation algorithm .
let @xmath27 be the set of monomials that , according to @xmath58 , need to be pre - evaluated , and let @xmath166 .
assume that the pre - evaluation of different monomials does not result in identical tables .
then , @xmath167 is a local optimum in the sense of definition [ def : mln - quasi - optimality ] and satisfies constraint [ const : th ] .
we first observe that the cost function @xmath131 predicts the _ exact _ gain / loss in monomial pre - evaluation , so @xmath27 and @xmath168 can actually be constructed .
let @xmath169 denote the operation count for @xmath57 and let @xmath170 be the subset of innermost loops ( all @xmath51 loops in figure [ code : loopnest - opt ] ) .
we need to show that there is no other synthesis @xmath171 satisfying constraint [ const : th ] such that @xmath172 .
this holds if and only if :
1 . _ the coordination of pre - evaluation with sharing elimination is optimal . _ this boils down to proving that :
( 1a ) _ pre - evaluating any @xmath173 would result in @xmath174 , _ and
( 1b ) _ not pre - evaluating any @xmath175 would result in @xmath174 . _
2 . _ sharing elimination leads to an ( at least ) local optimum . _
we discuss these points separately .
( 1a ) let @xmath176 represent the set of tables resulting from applying pre - evaluation to a monomial @xmath53 .
consider two monomials @xmath177 and the respective sets of pre - evaluated tables , @xmath178 and @xmath179 .
if @xmath180 , at least one table is assignable to the same temporary . @xmath58 , therefore , may not be optimal , since @xmath131 only distinguishes monomials in `` isolation '' .
we neglect this scenario ( see assumptions ) because of its purely pathological nature and its negligible impact , with high probability , on the operation count .
( 1b ) let @xmath181 and @xmath182 be two monomials sharing some generic multilinear symbols . if @xmath123 were carelessly pre - evaluated , a potential gain in sharing elimination may be lost , potentially leading to a non - optimum .
this situation is prevented by construction , because @xmath58 exhaustively searches all possible bipartitions in order to determine an optimum which satisfies constraint [ const : th ] .
recall that since the number of monomials is in practice very small , this pass can rapidly be accomplished .
2 . consider algorithm [ algo : sharing - elimination ] .
proposition [ prop : multi - struct ] ensures that there are only two ways of scheduling the multilinear operands in @xmath77 : through generalized code motion ( strategy [ strategy : i ] ) or factorization of multilinear symbols ( via strategy [ strategy : ii ] ) . if applied , these two strategies would lead , respectively , to performing @xmath80 and @xmath81 multiplications at every loop iteration .
since strategy [ strategy : i ] is applied if and only if @xmath183 and does not change the structure of the expression ( it requires neither expansion nor factorization ) , step ( 1 ) cannot prune the optimum from the search space .
+ after structuring the sharing graph @xmath97 in such a way that only flop - decreasing transformations are possible , the ilp model is instantiated . at this point , proving optimality reduces to establishing the correctness of the model , which is relatively straightforward because of its simplicity .
the model aims to minimize the operation count by selecting the most promising factorizations .
the second set of constraints ensures that all edges ( i.e. , all multiplications ) are selected exactly once .
the first set of inequalities allows multiplications to be scheduled : once a vertex @xmath96 is selected ( i.e. , once a symbol is chosen for factorization ) , all multiplications involving @xmath96 can be grouped . throughout the paper
we have reiterated the claim that algorithm [ algo : gamma ] achieves a globally optimal flop count if stronger preconditions on the input variational form are satisfied .
we state here these preconditions , in increasing order of complexity . 1 .
there is a single monomial and only a specific coefficient ( e.g. , the coordinates field ) .
this is by far the simplest scenario , which requires no particular transformation at the level of the outer loops , so optimality naturally follows .
2 . there is a single monomial , but multiple coefficients are present .
optimality is achieved if and only if all sub - expressions depending on coefficients are structured ( see section [ sec : se - rln ] ) .
this avoids ambiguity in factorization , which in turn guarantees that the output of step ( 7 ) in algorithm [ algo : sharing - elimination ] is optimal .
3 . there are multiple monomials , but either at most one coefficient ( e.g. , the coordinates field ) or multiple coefficients not inducing sharing across different monomials are present .
this reduces , respectively , to cases ( 1 ) and ( 2 ) above .
4 . there are multiple monomials , and coefficients are shared across monomials .
optimality is reached if and only if the coefficient - dependent sub - expressions produced by algorithm [ algo : sharing - elimination ] that is , the by - product of factorizing test / trial functions from distinct monomials preserve structure .
sharing elimination and pre - evaluation , as well as the transformation algorithm , have been implemented in coffee , the compiler for finite element integration routines adopted in firedrake . in this section
, we briefly discuss the aspects of the compiler that are relevant for this article .
coffee implements sharing elimination and pre - evaluation by composing building block transformation operators , which we refer to as _ rewrite operators_. this has several advantages .
the first is extensibility .
new transformations , such as sum factorization in spectral methods , could be expressed by composing the existing operators , or implemented with small effort by building on what is already available .
second , generality : coffee can be seen as a lightweight , low level computer algebra system , not necessarily tied to finite element integration .
third , robustness : the same operators are exploited , and therefore tested , by different optimization pipelines .
the rewrite operators , whose ( python ) implementation is based on manipulation of abstract syntax trees ( asts ) , comprise the coffee language .
a non - exhaustive list of such operators includes expansion , factorization , re - association , generalized code motion .
coffee aims to be independent of the high level form compiler .
it provides an interface to build generic asts and only expects expressions to be in normal form ( or sufficiently close to it ) .
for example , firedrake has transitioned from a version of the fenics form compiler @xcite modified to produce asts rather than strings , to a newly written compiler , while continuing to employ coffee .
thus , coffee decouples the mathematical manipulation of a form from code optimization ; or , in other words , relieves form compiler developers of the task of fine scale loop optimization of generated code . for several reasons
, basis function tables may be block - sparse ( e.g. , containing zero - valued columns ) .
for example , the fenics form compiler implements vector - valued functions by adding blocks of zero - valued columns to the corresponding tabulations ; this greatly simplifies code generation ( particularly , the construction of loop nests ) , but also affects the performance of the generated code due to the execution of `` useless '' flops ( e.g. , operations like ` a + 0 ` ) . in earlier work , a technique was proposed to avoid iteration over zero - valued columns , based on the use of indirection arrays ( e.g. ` a[b[i ] ] ` , in which ` a ` is a tabulated basis function and ` b ` a map from loop iterations to non - zero columns in ` a ` ) .
this technique , however , produces non - contiguous memory loads and stores , which nullify the potential benefits of vectorization .
coffee , instead , handles block - sparse basis function tables by restructuring loops in such a manner that low level optimization ( especially vectorization ) is only marginally affected .
this is based on symbolic execution of the code , which enables a series of checks on array indices and loop bounds that determine which zero - valued blocks can be skipped without affecting data alignment .
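a sketch of the underlying check : scan a tabulated basis function array for contiguous zero - valued column blocks and derive the sub - ranges of the column loop that actually need executing ( the array here is hypothetical ) :

```python
import numpy as np

def nonzero_column_ranges(table, tol=0.0):
    # columns holding at least one entry above the tolerance
    keep = np.any(np.abs(table) > tol, axis=0)
    ranges, start = [], None
    for j, k in enumerate(keep):
        if k and start is None:
            start = j                     # a non-zero block begins
        elif not k and start is not None:
            ranges.append((start, j))     # a zero block begins
            start = None
    if start is not None:
        ranges.append((start, len(keep)))
    return ranges

table = np.array([[1.0, 0.0, 0.0, 2.0],
                  [3.0, 0.0, 0.0, 4.0]])
print(nonzero_column_ranges(table))  # [(0, 1), (3, 4)]

# the column loop is then split over these contiguous ranges , so
# unit - stride , vectorization - friendly accesses are preserved
```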
experiments were run on a single core of an intel i7 - 2600 ( sandy bridge ) cpu , running at 3.4ghz , 32 kb l1 cache ( private ) , 256 kb l2 cache ( private ) and 8 mb l3 cache ( shared ) .
the intel turbo boost and intel speed step technologies were disabled .
the intel ` icc 15.2 ` compiler was used .
the compilation flags used were ` -o3 ` and ` -xhost ` ; the latter tells the intel compiler to generate efficient code for the underlying platform .
the zenodo system was used to archive all packages used to perform the experiments : firedrake @xcite , petsc @xcite , petsc4py @xcite , fiat @xcite , ufl @xcite , ffc @xcite , pyop2 @xcite and coffee @xcite .
the experiments can be reproduced using a publicly available benchmark suite @xcite .
we analyze the execution time of four real - world bilinear forms of increasing complexity , which comprise the differential operators that are most common in finite element methods .
in particular , we study the mass matrix ( `` ` mass ` '' ) and the bilinear forms arising in a helmholtz equation ( `` ` helmholtz ` '' ) , in an elastic model ( `` ` elasticity ` '' ) , and in a hyperelastic model ( `` ` hyperelasticity ` '' ) .
the complete specification of these forms is made publicly available .
we evaluate the speed - ups achieved by a wide variety of transformation systems over the `` original '' code produced by the fenics form compiler ( i.e. , no optimizations applied ) .
we analyze the following transformation systems :
quad : : optimized quadrature mode . work presented in earlier research , implemented in the fenics form compiler .
tens : : tensor contraction mode . work presented in earlier research , implemented in the fenics form compiler .
auto : : automatic choice between ` tens ` and ` quad ` driven by a heuristic ( summarized in section [ sec : qualitative ] ) . implemented in the fenics form compiler .
ufls : : uflacs , a novel back - end for the fenics form compiler whose primary goals are improved code generation and execution times .
cfo1 : : generalized loop - invariant code motion . work presented in earlier research , implemented in coffee .
cfo2 : : optimal loop nest synthesis with handling of block - sparse tables . work presented in this article , implemented in coffee .
the values that we report are the average of three runs with `` warm cache '' ; that is , with all kernels retrieved directly from the firedrake cache , so code generation and compilation times are not counted .
the timing includes , however , the cost of both local assembly and matrix insertion , with the latter minimized through the choice of a mesh ( details below ) small enough to fit in the l3 cache of the cpu . for a fair comparison , small patches were written to make ` quad ` , ` tens ` , and ` ufls ` compatible with firedrake . by executing all simulations in firedrake , we guarantee that both matrix insertion and mesh iteration have a fixed cost , independent of the transformation system employed .
the patches adjust the data storage layout to what firedrake expects ( e.g. , by generating an array of pointers instead of a pointer to pointers , or by replacing flattened arrays with bi - dimensional ones ) . for constraint [ const : th ] , discussed in section [ sec : mem - const ]
, we set @xmath184 ; that is , the size of the processor l2 cache ( the last level of private cache ) . when the threshold had an impact on the transformation process , the experiments were repeated with @xmath185 .
the results are documented later , individually for each problem . following the methodology adopted in prior studies , we vary the following parameters :
* the polynomial degree of test , trial , and coefficient ( or `` pre - multiplying '' ) functions , @xmath186 ;
* the number of coefficient functions @xmath187 ;
while constants of our study are :
* the space of test , trial , and coefficient functions : lagrange ;
* the mesh : tetrahedral with a total of 4374 elements ;
* exact numerical quadrature ( we employ the same scheme used in prior studies , based on the gauss - legendre - jacobi rule ) .
we report the results of our experiments in figures [ fig : mass ] , [ fig : helmholtz ] , [ fig : elasticity ] , and [ fig : hyperelasticity ] as three - dimensional plots .
the axes represent @xmath188 , @xmath189 , and code transformation system .
we show one subplot for each problem instance @xmath190 , with the code transformation system varying within each subplot . the best variant for each problem instance is given by the tallest bar , which indicates the maximum speed - up over non - transformed code . we note that if a bar or a subplot is missing , then the form compiler failed to generate code because it either exceeded the system memory limit or was otherwise unable to handle the form .
the rest of the section is organized as follows : we first provide insights into the general outcome of the experimentation ; we then comment on the impact of a fundamental low - level optimization , namely autovectorization ; finally , we motivate , for each form , the performance results obtained .
[ [ high - level - view ] ] high level view
our transformation strategy does not always guarantee minimum execution time . in particular , about 5@xmath191 of the test cases ( 3 out of 56 , without counting marginal differences )
show that ` cfo2 ` was not optimal in terms of runtime .
the most significant of such test cases is the elastic model with @xmath192 $ ] .
there are two reasons for this .
first , low level optimization can have a significant impact on the actual performance .
for example , the aggressive loop unrolling in ` tens ` eliminates operations on zeros and reduces the working set size by not storing entire temporaries ; on the other hand , preserving the loop structure can maximize the chances of autovectorization .
second , the transformation strategy adopted when @xmath141 is exceeded plays a key role , as we will later elaborate .
[ [ autovectorization ] ] autovectorization
we chose the mesh dimension and the function spaces such that the inner loop sizes would always be a multiple of the machine vector length .
this ensured autovectorization in the majority of code variants .
the biggest exception is ` quad ` , due to the presence of indirection arrays in the generated code . in ` tens ` , loop nests are fully unrolled , so the standard loop vectorization is not feasible ; the compiler reports suggest , however , that block vectorization @xcite is often triggered . in ` ufls ` , ` cfo1 ` , and ` cfo2 ` the iteration spaces have identical structure , with loop vectorization being regularly applied . [
[ mass - matrix ] ] mass matrix + + + + + + + + + + + we start with the simplest of the bilinear forms investigated , the mass matrix .
results are in figure [ fig : mass ] .
we first notice that the lack of improvements when @xmath193 is due to the fact that matrix insertion outweighs local assembly . for @xmath194 , ` cfo2 ` generally shows the highest speed - ups .
it is worth noting why ` auto ` does not always select the fastest implementation : ` auto ` always opts for ` tens ` , while for @xmath195 ` quad ` tends to be preferable . on the other hand , ` cfo2 ` always makes the optimal decision about whether or not to apply pre - evaluation . surprisingly , despite the simplicity of the form , the performance of the various code generation systems can differ significantly .
[ [ helmholtz ] ] helmholtz
as in the case of the mass matrix , when @xmath193 the matrix insertion phase is dominant . for @xmath194 ,
the general trend is that ` cfo2 ` outperforms the competitors . in particular :
@xmath196 : : pre - evaluation makes ` cfo2 ` notably faster than ` cfo1 ` , especially for high values of @xmath188 ; ` auto ` correctly selects ` tens ` , which is comparable to ` cfo2 ` .
@xmath197 : : ` auto ` picks ` tens ` ; the choice is however sub - optimal when @xmath198 and @xmath199 . this can indirectly be inferred from the large gap between ` cfo2 ` and ` tens / auto ` : ` cfo2 ` applies sharing elimination , but it correctly avoids pre - evaluation because of the excessive expansion cost .
@xmath200 and @xmath201 : : ` auto ` reverts to ` quad ` , which would theoretically be the right choice ( the flop count is much lower than in ` tens ` ) ; however , the generated code suffers from the presence of indirection arrays , which break autovectorization and `` traditional '' code motion .
the slow - downs ( or marginal improvements ) exhibited by ` ufls ` in a small number of cases can be attributed to residual sharing in the generated code .
an interesting experiment we additionally performed was relaxing the memory threshold by setting @xmath185 .
we found that this makes ` cfo2 ` generally slower for @xmath195 , with a maximum slow - down of 2.16@xmath202 with @xmath203 .
this effect could be worse when running in parallel , since the l3 cache is shared and different threads would end up competing for the same resource .
[ [ elasticity ] ] elasticity
the results for the elastic model are displayed in figure [ fig : elasticity ] . the main observation is that ` cfo2 ` never triggers pre - evaluation , although on some occasions it should . to clarify this ,
the main observation is that ` cfo2 ` never triggers pre - evaluation , although in some occasions it should . to clarify this ,
consider the test case @xmath204 , in which ` tens / auto ` show a considerable speed - up over ` cfo2 ` . `
cfo2 ` finds pre - evaluation profitable in terms of operation count , although it is eventually not applied to avoid exceeding @xmath141 .
however , running the same experiments with @xmath185 resulted in a dramatic improvement , even higher than that obtained by ` tens ` .
the reason is that , despite exceeding @xmath141 by roughly 40@xmath191 , the saving in operation count is so large ( 5@xmath202 in this specific problem ) that pre - evaluation would in practice be the winning choice .
this suggests that our objective function should be improved to handle the cases in which there is a significant gap between potential cache misses and reduction in operation count .
we also note that : * the differences between ` cfo2 ` and ` cfo1 ` are due to the perfect sharing elimination and the zero - valued blocks avoidance technique presented in section [ sec : zeros ] .
* when @xmath197 , ` auto ` prefers ` tens ` over ` quad ` , which leads to sub - optimal operation counts and execution times . * ` ufls ` often results in better execution times than ` quad ` and ` tens ` .
this is due to multiple factors , including avoidance of indirection arrays , preservation of loop structure , and a more effective code motion strategy .
[ [ hyperelasticity ] ] hyperelasticity
in the experiments on the hyperelastic model , shown in figure [ fig : hyperelasticity ] , ` cfo2 ` exhibits the largest gains out of all problem instances considered in this paper .
this is a positive result , since it indicates that our transformation algorithm scales well with form complexity .
the fact that all code transformation systems ( apart from ` tens ` ) show quite significant speed - ups suggests two points .
first , the baseline is highly inefficient . with forms as complex as in the hyperelastic model , a trivial translation of integration routines into code
should always be avoided , since even the best general - purpose compiler available ( the intel compiler on an intel platform at maximum optimization level ) fails to exploit the structure inherent in the expressions .
second , the strategy for removing spatial and temporal sharing has a tremendous impact .
sharing elimination as performed by ` cfo2 ` ensures a critical reduction in operation count , which becomes particularly pronounced for higher values of @xmath188 .
we have developed a theory for the optimization of finite element integration loop nests .
the article details the domain properties which are exploited by our approach ( e.g. , linearity ) and how these translate to transformations at the level of loop nests .
all of the algorithms shown in this paper have been implemented in coffee , a publicly available compiler fully integrated with the firedrake framework .
we discussed the correctness of the transformation algorithm , and the performance results achieved demonstrate the effectiveness of our methodology .
we have defined sharing elimination and pre - evaluation as high level transformations on top of a specific set of rewrite operators , such as code motion and factorization , and we have used them to construct the transformation space .
there are three main limitations in this process .
first , we do not have a systematic strategy to optimize sub - expressions which are independent of linear loops .
although we have a mechanism to determine how much computation should be hoisted to the level of the integration ( reduction ) loop , it is not clear how to effectively improve the heuristics used at step ( 6 ) in algorithm [ algo : sharing - elimination ] .
second , lower operation counts may be found by exploiting domain - specific properties , such as redundancies in basis functions ; this aspect is completely neglected in this article .
third , with constraint [ const : le ] we have limited the applicability of code motion .
this constraint was essential given the complexity of the problem tackled .
another issue raised by the experimentation concerns the selection of a proper threshold for constraint [ const : th ] . solving this problem would require a more sophisticated cost model , which is an interesting question deserving further research .
we also identify two additional possible research directions : a complete classification of forms for which a global optimum is achieved ; and a generalization of the methodology to other classes of loop nests , for instance those arising in spectral element methods .

we present an algorithm for the optimization of a class of finite element integration loop nests .
this algorithm , which exploits fundamental mathematical properties of finite element operators , is proven to achieve a locally optimal operation count . in specified circumstances
the optimum achieved is global .
extensive numerical experiments demonstrate significant performance improvements over the state of the art in finite element code generation in almost all cases .
this validates the effectiveness of the algorithm presented here , and illustrates its limitations .
this work was supported by the department of computing at imperial college london , the engineering and physical sciences research council [ grant number ep / l000407/1 ] , the natural environment research council [ grant numbers ne / k008951/1 and ne / k006789/1 ] , and a hipeac collaboration grant .
the authors would like to thank dr . andrew t.t . mcrae , dr . lawrence mitchell , and dr . francis russell for their invaluable suggestions and their contribution to the firedrake project .
authors ' addresses : fabio luporini @xmath0 paul h. j. kelly , department of computing , imperial college london ; david a. ham , department of mathematics , imperial college london .
the study of sample path continuity and hlder regularity of stochastic processes is a very active field of research in probability theory .
the existing literature provides a variety of uniform results on local regularity , especially on the modulus of continuity , for rather general classes of random fields ( see e.g. @xcite , @xcite on gaussian processes and @xcite for more recent developments ) . on the other hand ,
the structure of pointwise regularity is often more complex , in particular because it often behaves erratically as time passes .
such sample path behaviour was first brought to light for brownian motion by @xcite and @xcite .
they respectively studied _ fast _ and _ slow points _ , which characterize logarithmic variations of the pointwise modulus of continuity , and proved that the sets of times with a given pointwise regularity have a particular fractal geometry .
@xcite recently extended the fast points study to fractional brownian motion .
as exhibited by @xcite , lvy processes with a jump component also display an interesting pointwise behaviour .
indeed , for this class of random fields , the pointwise exponent varies randomly inside a closed interval as time passes .
this seminal work has been enhanced and extended by @xcite , @xcite and @xcite .
particularly , the latter proves that markov processes have a range of admissible pointwise behaviours much wider than lvy processes . in the aforementioned works , _ multifractal analysis _ happens to be the key concept for the study and the characterisation of local fluctuations of pointwise regularity . in order to be more specific , let us first recall the definitions of the different notions previously outlined .
a function @xmath1 belongs to @xmath2 , with @xmath3 and @xmath4 , if there exist @xmath5 , @xmath6 and a polynomial @xmath7 of degree less than @xmath0 such that @xmath8 the _ pointwise hlder exponent _ of @xmath9 at @xmath10 is defined by @xmath11 , where by convention @xmath12 .
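for the reader's convenience , the standard form of these definitions ( written with f , x_0 and \alpha in place of @xmath9 , @xmath10 and the exponent ; this is the classical statement , not necessarily the exact normalization used here ) is :

    f \in C^{\alpha}(x_0) \iff \exists\, c, \rho > 0 ,\ \exists\, P ,\ \deg P < \alpha :\quad
        |f(x) - P(x - x_0)| \le c\,|x - x_0|^{\alpha} \quad \text{whenever } |x - x_0| \le \rho ,

    \alpha_f(x_0) = \sup\{\, \alpha \ge 0 : f \in C^{\alpha}(x_0) \,\} .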
multifractal analysis is concerned with the study of the level sets of the pointwise exponent , usually called the _ iso - hlder sets _ of @xmath9 , @xmath13 to describe the geometry of the collection @xmath14 , and thereby to determine the arrangement of the hlder regularity , we are interested in the _ local spectrum of singularities _ of @xmath9 . it is usually denoted @xmath15 and defined by @xmath16 where @xmath17 denotes the collection of nonempty open sets of @xmath18 and @xmath19 the hausdorff dimension ( by convention @xmath20 ) . although @xmath14 are random sets , stochastic processes such as lvy processes @xcite , lvy processes in multifractal time @xcite and fractional brownian motion happen to have a deterministic multifractal spectrum .
furthermore , these random fields are also said to be _ homogeneous _ as the hausdorff dimension @xmath21 is independent of the set @xmath22 for all @xmath23 .
when the pointwise exponent is constant along sample paths , the spectrum is described as _ degenerate _ , i.e. its support is reduced to a single point ( e.g. the hurst exponent in the case of f.b.m . ) .
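in practice , the pointwise regularity of a sampled path can be probed numerically by regressing log - oscillations against log - scales ; the python sketch below is purely illustrative ( function and parameter names are ours , not taken from the cited works ) :

    import numpy as np

    def pointwise_holder_estimate(path, i, scales=(2, 4, 8, 16, 32)):
        # estimate the pointwise exponent at index i from the slope of a
        # log-log regression of path oscillations over shrinking windows
        log_scale, log_osc = [], []
        n = len(path)
        for r in scales:
            lo, hi = max(0, i - r), min(n, i + r + 1)
            osc = path[lo:hi].max() - path[lo:hi].min()
            if osc > 0:
                log_scale.append(np.log(r / n))
                log_osc.append(np.log(osc))
        slope, _ = np.polyfit(log_scale, log_osc, 1)
        return slope

applied to a discretised brownian path , such an estimator returns values scattered around 1/2 , consistently with the degenerate spectrum just recalled .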
nevertheless , @xcite and @xcite provided examples of , respectively , markov jump processes and wavelet random series with non - homogeneous and random spectra of singularities . as stated in the two equations above , classic multifractal analysis deals with the study of the variations of pointwise regularity . unfortunately , it is known that the common hlder exponents ( local as well as pointwise ) do not give a complete picture of the local regularity ( see e.g. the deterministic chirp function @xmath24 detailed in @xcite ) . furthermore , they also lack stability under the action of pseudo - differential operators . _ 2-microlocal analysis _ is one natural way to overcome these issues and obtain a more precise description of the local regularity .
it has first been introduced by @xcite in the deterministic frame to study properties of generalized solutions of pde .
more recently , @xcite and @xcite developed a stochastic approach based on this framework to investigate the finer regularity of stochastic processes such as gaussian processes , martingales and stochastic integrals . in order to study sample path properties in this frame ,
we need to recall the concept of _ 2-microlocal space _ . [ def:2ml_spaces ] let @xmath3 , @xmath25 and @xmath26 such that @xmath27 .
a function @xmath1 belongs to the _ 2-microlocal space _
if there exist @xmath5 , @xmath6 and a polynomial @xmath7 such that @xmath29 for all @xmath30 . the time - domain characterisation of 2-microlocal spaces has been obtained by @xcite .
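this characterisation is commonly stated in the following form ( a sketch under the usual normalization , for 0 < s + s' < 1 ) :

    f \in C^{s,s'}_{x_0} \iff \exists\, c, \rho > 0 ,\ \exists\, P :\quad
        \bigl| f(x) - P(x - x_0) - f(y) + P(y - x_0) \bigr|
            \le c\, |x - y|^{s+s'} \bigl( |x - y| + |x - x_0| \bigr)^{-s'}
        \quad \text{for all } |x - x_0| \le \rho ,\ |y - x_0| \le \rho .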
the original definition given by @xcite relies on the littlewood - paley decomposition of tempered distributions , and thereby corresponds to a description in the fourier space .
another characterisation based on wavelet expansion has also been exhibited by @xcite .
the extension of definition [ def:2ml_spaces ] to @xmath31 relies on the following important property satisfied by 2-microlocal spaces ( see theorem @xmath32 in @xcite ) , @xmath33 where @xmath34 designates the fractional integral of @xmath9 of order @xmath0 . as a consequence , applying this property to iterated integrals or derivatives of @xmath9 provides an extension of definition [ def:2ml_spaces ] to any @xmath35 , which is sufficient for the purpose of this paper .
similarly to the pointwise hlder exponent , the introduction of 2-microlocal spaces leads naturally to the definition of a regularity tool named the _ 2-microlocal frontier _ , given by @xmath36 2-microlocal spaces enjoy several inclusion properties which imply that the map @xmath37 is well - defined and displays the following features : * @xmath38 is a concave non - decreasing function ; * @xmath38 has left and right derivatives between @xmath39 and @xmath40 . as a function , the 2-microlocal frontier @xmath38 offers a more complete description of the local regularity .
in particular , it embraces the local hlder exponent since @xmath41 . furthermore , as stated in @xcite , if the modulus of continuity of @xmath9 satisfies @xmath42 , the pointwise exponent can also be retrieved using the formula @xmath43 .
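in the usual notation , the frontier and the two recovery formulas just mentioned can be summarized as ( a standard statement , with \alpha_l and \alpha_p the local and pointwise exponents ) :

    \sigma_{f,x_0}(s') = \sup\{\, s : f \in C^{s,s'}_{x_0} \,\} ,\qquad
    \alpha_l(x_0) = \sigma_{f,x_0}(0) ,\qquad
    \alpha_p(x_0) = -\inf\{\, s' : \sigma_{f,x_0}(s') \ge 0 \,\} ,

the last equality holding under the modulus - of - continuity condition above . for brownian motion , recalled in the next paragraph , this frontier is commonly written \sigma_t(s') = (s' + 1/2) \wedge 1/2 for every t .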
note that the previous formula can not be directly deduced from equation [ eq : def_2ml_spaces ] , since definition [ def:2ml_spaces ] does not hold when @xmath44 .
@xcite provides an example of a generalized function which does not satisfy this relation .
as observed in @xcite , brownian motion provides a simple instance of a 2-microlocal frontier in the stochastic frame : almost surely for all @xmath3 , @xmath45 satisfies @xmath46 in this paper , the 2-microlocal approach is combined with the classic use of multifractal analysis to obtain a finer description of the regularity of stochastic processes . following the path of @xcite ,
we refine the multifractal description of lvy processes ( section [ sec:2ml_levy ] ) and observe in particular that the 2-microlocal formalism captures subtle behaviours that can not be characterized by the classic spectrum of singularities .
this finer analysis of sample path properties of lvy processes happens to be very useful for the study of another class of processes named linear fractional stable motion ( lfsm ) .
the lfsm is a common @xmath0-stable self - similar process with stationary increments , and can be seen as an extension of the fractional brownian motion to the non - gaussian frame .
since it also has long range dependence and heavy tails , it is of great interest in modelling . in section [ sec:2ml_lfsm ]
, we completely characterize the multifractal nature of the lfsm , and thereby illustrate the fact that 2-microlocal analysis is well - suited to study the regularity of unbounded sample paths as well as continuous ones . as it is well known , an @xmath47-valued lvy process @xmath48 has stationary and independent increments .
its law is determined by the lvy - khintchine formula ( see e.g. @xcite ) : for all @xmath49 and @xmath50 , @xmath51 } } = e^{t\psi(\lambda)}$ ] where @xmath52 is given by @xmath53 @xmath54 is a non - negative symmetric matrix and @xmath55 a lvy measure , i.e. a positive radon measure on @xmath56 such that @xmath57 . throughout this paper
, it will always be assumed that @xmath58 .
otherwise , the lvy process simply corresponds to the sum of a compound poisson process with drift and a brownian motion whose regularity can be simply deduced .
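for reference , the lvy symbol takes the standard lvy - khintchine form ( with generating triplet ( b , a , \pi ) in the usual notation ) :

    \psi(\lambda) = i\langle b, \lambda\rangle - \tfrac{1}{2}\langle \lambda, a\lambda\rangle
        + \int_{\mathbf{R}^d \setminus \{0\}} \bigl( e^{i\langle \lambda, x\rangle} - 1
            - i\langle \lambda, x\rangle \mathbf{1}_{\{|x| \le 1\}} \bigr)\, \pi(\mathrm{d}x) .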
sample path properties of lvy processes are known to depend on the growth of the lvy measure near the origin .
more precisely , @xcite defined the following exponents @xmath59 and @xmath60 , @xmath61 owing to the definition of @xmath55 , @xmath62}}$ ] .
@xcite proved that @xmath63 when @xmath64 .
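the first of these exponents is the classical blumenthal - getoor index , whose standard definition reads :

    \beta = \inf\Bigl\{\, \gamma \ge 0 : \int_{|x| \le 1} |x|^{\gamma}\, \pi(\mathrm{d}x) < +\infty \,\Bigr\} \in [0, 2] ,

the upper bound 2 following from the integrability condition imposed on any lvy measure .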
note that several other exponents have been defined in the literature focusing on lvy processes sample paths properties ( see e.g. @xcite for some recent developments ) .
@xcite studied the spectrum of singularities of lvy processes under the following assumption on the measure @xmath55 , @xmath65 under this hypothesis , theorem 1 in @xcite states that the multifractal spectrum of a lvy process @xmath66 is almost surely equal to @xmath67}}. \end{cases}\ ] ] note that this equation still holds when @xmath68 .
@xcite extended this result to hausdorff @xmath69-measures , where @xmath69 is a gauge function , and @xcite generalized the study to multivariate lvy fields .
we establish in proposition [ prop : pointwise_levy ] a new proof of the multifractal spectrum which does not require this assumption .
we observe that results obtained in @xcite on hausdorff @xmath69-measure are also extended using this method . in order to refine the spectrum of singularities , we focus on the study of the 2-microlocal frontier of lvy processes .
for that purpose , we introduce and study the collections of sets @xmath70 and @xmath71 respectively defined by @xmath72 the family @xmath70 represents the set of times at which the 2-microlocal behaviour is rather common ( and thus similar to the 2-microlocal frontier of brownian motion ) , whereas at points which belong to @xmath71 , the 2-microlocal frontier has an unusual form , with in particular a slope different from @xmath40 at @xmath73 .
the next statement gathers our main result on the 2-microlocal regularity of lvy processes .
[ th:2ml_levy ] sample paths of a lvy process @xmath66 almost surely satisfy @xmath74}}. \end{cases}\ ] ] the collection of sets @xmath71 enjoys almost surely @xmath75}}\cup{\ensuremath{[1/\beta',+\infty]}}. \end{cases}\ ] ] furthermore , the 2-microlocal frontier at @xmath76 satisfies @xmath77 for all @xmath78 .
the previous statement implies that @xmath79 for all @xmath80}}$ ] .
hence , from a hausdorff dimension point of view , the majority of times @xmath49 have a rather classic 2-microlocal frontier @xmath81 .
the collection of sets @xmath71 illustrates the fact that 2-microlocal analysis can capture particular behaviours that are not necessarily described by a classic multifractal spectrum .
examples [ ex:2ml_levy1 ] and [ ex:2ml_levy2 ] constructed in section [ ssec:2ml_levy_ex ] show that different behaviours may occur , depending on properties of the lvy measure .
the first one provides a class of lvy processes which satisfy @xmath82 for all @xmath80}}$ ] . on the other hand , example [ ex:2ml_levy2 ]
constructs a collection of lvy measures @xmath83 such that the related lvy process almost surely enjoys @xmath84 . it remains an open question to completely characterize the collection @xmath71 in terms of the lvy measure @xmath55 ( examples [ ex:2ml_levy1 ] and [ ex:2ml_levy2 ] indeed prove that the blumenthal - getoor exponent @xmath59 is not sufficient ) .
although sample paths of lvy processes do not satisfy the condition @xmath85 outlined in the introduction , theorem [ th:2ml_levy ] nevertheless ensures that the pointwise hlder exponent can be retrieved from the 2-microlocal frontier at any @xmath49 using the formula @xmath86 .
since this work extends the study of the classic spectrum of singularities , it is also quite natural to investigate geometrical properties of the sets @xmath87 defined by @xmath88 theorem [ th:2ml_levy ] yields the next statement .
[ cor:2ml_levy ] a lvy process @xmath66 satisfies almost surely for any @xmath89}}$ ] , @xmath90 where @xmath91 denotes the common 2-microlocal parameter @xmath92 .
furthermore , for all @xmath93 , @xmath94 and @xmath95 is empty if @xmath96 .
corollary [ cor:2ml_levy ] generalizes the multifractal formula since the spectrum of singularities corresponds to the case @xmath44 .
note that the subtle behaviour exhibited in theorem [ th:2ml_levy ] is not captured by this equality . as outlined in the proof ( section [ ssec:2ml_levy ] ) , this property disappears because the sets @xmath97 are negligible compared to @xmath98 in terms of hausdorff dimension .
the regularity results established in theorem [ th:2ml_levy ] also happen to be interesting outside the scope of lvy processes , thanks to the powerful properties satisfied by the 2-microlocal frontier .
more precisely , they allow us to characterize the multifractal nature of the linear fractional stable motion ( lfsm ) .
this process is usually defined by the following stochastic integral ( see e.g. @xcite ) @xmath99 where @xmath100 is an @xmath0-stable random measure and @xmath101 .
several regularity properties have been determined in the literature .
in particular , sample paths are known to be nowhere bounded @xcite if @xmath102 , whereas they are hlder continuous when @xmath103 . in this latter case
, @xcite proved that the pointwise and local hlder exponents satisfy almost surely @xmath104 and @xmath105 . in the sequel
, we will assume that @xmath106 , which is in particular required to obtain hlder continuous sample paths ( @xmath103 ) .
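for reference , the general integral form of the lfsm , written out in full in the dedicated section below , is :

    X_t = \int_{\mathbf{R}} \Bigl\{ a^{+} \bigl[ (t - u)_{+}^{H - 1/\alpha} - (-u)_{+}^{H - 1/\alpha} \bigr]
        + a^{-} \bigl[ (t - u)_{-}^{H - 1/\alpha} - (-u)_{-}^{H - 1/\alpha} \bigr] \Bigr\}\, M_{\alpha,\beta}(\mathrm{d}u) ,

where M_{\alpha,\beta} is the \alpha-stable random measure and a^{+} , a^{-} are real constants .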
using an alternative representation of lfsm obtained in proposition [ prop : rep_lfsm ] , we enhance the aforementioned regularity results and obtain a description of the multifractal spectrum of the lfsm .
[ th:2ml_lfsm ] let @xmath66 be a linear fractional stable motion parametrized by @xmath106 and @xmath101 .
it satisfies almost surely for all @xmath107}}$ ] , @xmath108 } } ; \\ -\infty & \text { otherwise . } \end{cases}\ ] ] where @xmath92 .
furthermore , for all @xmath93 , @xmath95 is empty if @xmath109 . in the continuous case @xmath103 ,
theorem [ th:2ml_lfsm ] ensures that the multifractal spectrum ( @xmath44 ) of the lfsm is equal to @xmath110 } } ; \\
-\infty & \text { otherwise}. \end{cases}\ ] ] the spectrum [ eq : spectrum_lfsm ] and the equation above clearly extend the aforementioned lower and upper bounds obtained on the pointwise and local hlder exponents .
we also note that , as could be expected , the lfsm is a homogeneous multifractal process .
more generally , we observe that theorem [ th:2ml_lfsm ] unifies , in terms of regularity , the continuous ( @xmath111 ) and unbounded ( @xmath112 ) cases . indeed , in both situations , the domain of acceptable 2-microlocal frontiers has the same multifractal structure .
when @xmath113 , it intersects the @xmath114-axis , which induces @xmath115 and therefore the continuity of trajectories owing to properties of the 2-microlocal frontier . on the contrary ,
when the domain is located strictly below the @xmath114-axis , it implies that sample paths are nowhere bounded .
nevertheless , the proof of theorem [ th:2ml_lfsm ] ensures in this case the existence of a modification of the lfsm such that sample paths are tempered distributions whose 2-microlocal regularity can be studied as well .
figure [ fig:2ml_lfsms ] illustrates this dichotomy .
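to make this dichotomy concrete , the python sketch below simulates the continuous case @xmath103 through a riemann - sum discretisation of the moving - average representation , with chambers - mallows - stuck sampling for the stable noise ( grid sizes and cut - offs are illustrative choices , not the paper's construction ) :

    import numpy as np

    rng = np.random.default_rng(1)

    def stable_rvs(alpha, size):
        # chambers-mallows-stuck sampler for symmetric alpha-stable variates
        u = rng.uniform(-np.pi / 2, np.pi / 2, size)
        w = rng.exponential(1.0, size)
        return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
                * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

    def frac_power(x, e):
        # (x)_+ ** e with the convention 0 ** e = 0
        out = np.zeros_like(x)
        mask = x > 0
        out[mask] = x[mask] ** e
        return out

    def lfsm(alpha=1.5, H=0.7, n=1000, history=2000):
        # riemann-sum approximation of
        # X_t = int [ (t - u)_+^(H - 1/alpha) - (-u)_+^(H - 1/alpha) ] M(du)
        dt = 1.0 / n
        noise = stable_rvs(alpha, n + history) * dt ** (1.0 / alpha)
        u = (np.arange(-history, n) + 0.5) * dt
        e = H - 1.0 / alpha
        t_grid = np.arange(1, n + 1) * dt
        path = np.array([(frac_power(t - u, e) - frac_power(-u, e)) @ noise
                         for t in t_grid])
        return t_grid, path

with H > 1/alpha the sampled paths are continuous , while choosing H < 1/alpha makes the discretisation blow up as the grid is refined , reflecting the unbounded regime .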
an equivalent result is obtained in proposition [ prop:2ml_flp ] for a similar class of processes called fractional lvy processes ( see @xcite ) .
the lfsm admits a natural multifractional extension which has been introduced and studied in @xcite .
the definition of the linear multifractional stable motion ( lmsm ) is given by the equation above , where the hurst exponent @xmath116 is replaced by a function @xmath117 .
@xcite obtained lower and upper bounds on hlder exponents which are similar to lfsm results : for all @xmath49 , @xmath118 and @xmath119 almost surely .
@xcite recently investigated the existence of optimal local modulus of continuity .
theorem [ th:2ml_lfsm ] can be generalized to the lmsm in the continuous case .
more precisely , we assume that the hurst function satisfies the following assumption , @xmath120 since the lmsm is clearly a non - homogeneous process , it is natural to focus on the study of the spectrum of singularities localized at @xmath49 , i.e. @xmath121 the next statement corresponds to an adaptation of theorem [ th:2ml_lfsm ] to the lmsm .
[ th:2ml_lmsm ] let @xmath66 be a linear multifractional stable motion parametrized by @xmath122 and an @xmath123-hurst function @xmath116 .
it satisfies almost surely for all @xmath3 and for all @xmath124 , @xmath125 } } ; \\ -\infty & \text { otherwise . } \end{cases}\ ] ] where @xmath92 .
furthermore , the set @xmath126 is empty for any @xmath127 and @xmath6 sufficiently small .
theorem [ th:2ml_lmsm ] extends results presented in @xcite . in particular , it ensures that the localized multifractal spectrum is equal to @xmath128 } } ; \\
-\infty & \text { otherwise}. \end{cases}\ ] ] moreover , we observe that proposition [ prop : rep_lfsm ] and theorem [ th:2ml_lmsm ] still hold when the hurst function @xmath129 is a continuous random process .
thereby , similarly to the works of @xcite and @xcite , it provides a class of stochastic processes whose spectrum of singularities , given by the equation above , is non - homogeneous and random .
in this section , @xmath66 will designate a lvy process parametrized by the generating triplet @xmath130 .
the lvy - it decomposition states that it can be represented as the sum of three independent processes @xmath131 , @xmath132 and @xmath133 , where @xmath131 is a @xmath134-dimensional brownian motion , @xmath132 is a compound poisson process with drift and @xmath133 is a lvy process characterized by @xmath135 . without any loss of generality , we restrict the study to the time interval @xmath136}}$ ] .
as noticed in @xcite , the component @xmath132 does not affect the regularity of @xmath66 since its trajectories are piecewise linear with a finite number of jumps .
sample path properties of brownian motion are well - known and therefore , we first focus in the sequel on the study of the jump process @xmath133 .
we know there exists a poisson measure @xmath137 of intensity @xmath138 such that @xmath133 is given by @xmath139}}\times d({\varepsilon},1 ) } x \,j({\ensuremath{\mathrm d}\xspace}s,{\ensuremath{\mathrm d}\xspace}x ) - t\int_{d({\varepsilon},1 ) } x \,\pi({\ensuremath{\mathrm d}\xspace}x ) \biggr]},\ ] ] where for all @xmath140 , @xmath141 .
moreover , as presented in @xcite ( theorem 19.2 ) , the convergence is almost surely uniform on any bounded interval . for any @xmath142
, @xmath143 will denote the following lvy process @xmath144}}\times d({\varepsilon},2^{-m } ) } x \,j({\ensuremath{\mathrm d}\xspace}s,{\ensuremath{\mathrm d}\xspace}x ) - t\int_{d({\varepsilon},2^{-m } ) } x \,\pi({\ensuremath{\mathrm d}\xspace}x ) \biggr]}.\ ] ] we extend in this section the multifractal spectrum to any lvy process . to begin with , we prove two technical lemmas that will be extensively used in the sequel .
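as a concrete illustration of this truncation , the python sketch below simulates the jump part restricted to jumps of size at least a cut - off , for a symmetric stable - like measure ; all names and parameter values are illustrative choices , not the paper's :

    import numpy as np

    rng = np.random.default_rng(0)

    def truncated_jump_part(alpha=1.2, eps=0.01, c=1.0, T=1.0, n_grid=1000):
        # jump part built from pi(dx) = c |x|^(-1 - alpha) dx restricted to
        # eps <= |x| < 1; the measure being symmetric, the compensator
        # vanishes and no drift correction is needed in this sketch
        mass = 2.0 * c * (eps ** -alpha - 1.0) / alpha  # pi(eps <= |x| < 1)
        n_jumps = rng.poisson(mass * T)
        times = rng.uniform(0.0, T, n_jumps)
        # inverse-cdf sampling: P(|X| >= r) = (r^-alpha - 1)/(eps^-alpha - 1)
        u = rng.uniform(0.0, 1.0, n_jumps)
        radii = (u * (eps ** -alpha - 1.0) + 1.0) ** (-1.0 / alpha)
        jumps = rng.choice([-1.0, 1.0], n_jumps) * radii
        grid = np.linspace(0.0, T, n_grid)
        path = np.array([jumps[times <= t].sum() for t in grid])
        return grid, path

letting the cut - off decrease illustrates the almost sure uniform convergence recalled above .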
[ lemma : tech_lemma1 ] for any @xmath145 , there exists a constant @xmath146 such that for all @xmath142 @xmath147 let @xmath148 .
we first observe that for any @xmath142 , @xmath149 therefore , it is sufficient to prove that there exists @xmath150 such that for any @xmath151 , @xmath152 let @xmath153 and @xmath154 for all @xmath49 . according to theorem @xmath155 in @xcite , we have @xmath156 } } = \exp{\bigl\ { t \int_{d(0,2^{-m/\delta } ) } { \bigl ( e^{\lambda{\langle { \varepsilon},x \rangle } } -1 - \lambda{\langle { \varepsilon},x \rangle } \bigr ) } \pi({\ensuremath{\mathrm d}\xspace}x ) \bigr\}}$ ] .
furthermore , we observe that for all @xmath157 , @xmath158 } } = m_s \exp{\biggl\ { ( t - s ) \int_{d(0,2^{-m/\delta } ) } { \bigl ( e^{\lambda{\langle { \varepsilon},x \rangle } } -1 - \lambda{\langle { \varepsilon},x \rangle } \bigr ) } \pi({\ensuremath{\mathrm d}\xspace}x ) \biggr\ } } \geq m_s,\ ] ] since for any @xmath159 , @xmath160 .
hence , @xmath161 is a positive submartingale , and using doob s inequality ( theorem 1.7 in @xcite ) , we obtain @xmath162}}. \end{aligned}\ ] ] for all @xmath163}}$ ] , we note that @xmath164 . thus , for any @xmath142 , @xmath165 } } & \leq \exp{\biggl\ { 2^{-m } \int_{d(0,2^{-m/\delta } ) } \lambda^2 { \langle { \varepsilon},x \rangle}^2 \pi({\ensuremath{\mathrm d}\xspace}x ) \biggr\ } } \leq \exp{\biggl\ { 2^{-m } \int_{d(0,2^{-m/\delta } ) } \lambda^2 { \lvertx\rvert}^2 \pi({\ensuremath{\mathrm d}\xspace}x ) \biggr\}}.
\end{aligned}\ ] ] if @xmath166 , let @xmath167 be such that @xmath168 and @xmath169
. then , we obtain @xmath170 since @xmath169 .
if @xmath171 , we simply observe that @xmath172 as @xmath173 .
therefore , there exists @xmath146 such that for all @xmath142 , @xmath174 } } \leq c_\delta$ ] , which proves the lemma .
[ lemma : tech_lemma2 ] for any @xmath145 , there exists a constant @xmath146 such that for all @xmath142 @xmath175 } } , { \lvertu - v\rvert}\leq 2^{-m } } { \bigl\lvert y^{m/\delta}_u - y^{m/\delta}_v \bigr\rvert}_1 \geq 3 m 2^{-m/\delta } \biggr ) } } \leq c_\delta e^{-dm},\ ] ] where @xmath176 is positive constant independent of @xmath177 .
we note that for any @xmath142 and all @xmath148 , @xmath178 } } , { \lvertu - v\rvert}\leq 2^{-m } } { \bigl\lvert y^{m/\delta}_u - y^{m/\delta}_v \bigr\rvert}_1 \geq 3 m 2^{-m/\delta } \biggr\ } } \\ & \subseteq \bigcup_{k=0}^{2^m-1 } { \biggl\ { \sup_{t\leq 2^{-m } } { \bigl\lvert y^{m/\delta}_{t+k2^{-m } } - y^{m/\delta}_{k2^{-m } } \bigr\rvert}_1 \geq m 2^{-m/\delta } \biggr\}}. \end{aligned}\ ] ] therefore , the stationarity of lvy processes and lemma [ lemma : tech_lemma1 ] yield @xmath175 } } , { \lvertu - v\rvert}\leq 2^{-m } } { \bigl\lvert y^{m/\delta}_u - y^{m/\delta}_v \bigr\rvert}_1 \geq 3 m 2^{-m/\delta } \biggr ) } } \leq 2^m c_\delta e^{-m } = c_\delta e^{-dm},\ ] ] where @xmath179 .
let us recall the definition of the collection of random sets @xmath180 introduced in @xcite .
for every @xmath181 , @xmath182 denotes the countable set of jumps of @xmath183 .
moreover , for any @xmath184 , let @xmath185 be @xmath186}}.\ ] ] then , the random set @xmath187 is defined by @xmath188 . as noticed in @xcite , if @xmath189 , we necessarily have @xmath190 . the other side inequality is obtained in the next statement , extending the proof of proposition 2 from @xcite . [ prop : pointwise_levy ] let @xmath145 .
almost surely for all @xmath191}}\setminus s(\omega)$ ] , @xmath133 satisfies @xmath192 using lemma [ lemma : tech_lemma2 ] and the borel - cantelli lemma , we know that almost surely , there exists @xmath193 such that for any @xmath194 , @xmath195}}\ \text { such that } \ { \lvertu - v\rvert}\leq
2^{-m};\quad { \bigl\lvert y^{m/\delta}_v - y^{m/\delta}_u \bigr\rvert } \leq c m 2^{-m/\delta},\ ] ] where @xmath196 is a positive constant . furthermore , as @xmath197
, there exists @xmath198 such that for all @xmath199 , @xmath200 .
then , for all @xmath201 in the neighbourhood of @xmath10 , we have @xmath202}}\times d({\varepsilon}_0,1 ) } x \,j({\ensuremath{\mathrm d}\xspace}s,{\ensuremath{\mathrm d}\xspace}x ) - ( t - u)\int_{d({\varepsilon}_0,1 ) } x \,\pi({\ensuremath{\mathrm d}\xspace}x ) = - ( t - u)\int_{d({\varepsilon}_0,1 ) } x \,\pi({\ensuremath{\mathrm d}\xspace}x),\ ] ] since a linear component does not contribute to the pointwise exponent , we only have to consider the remaining part of the lvy process to characterize the regularity .
let @xmath203}}$ ] and @xmath204 such that @xmath205 .
@xmath206 can be supposed large enough to satisfy @xmath207 .
let @xmath208 .
for any jump @xmath209 whose norm is in the interval @xmath210}}$ ] , we have @xmath211 .
therefore , there is no such jumps @xmath209 in the interval @xmath212 $ ] and @xmath213}}\times d({\varepsilon}_1,{\varepsilon}_0 ) } x \,j({\ensuremath{\mathrm d}\xspace}s,{\ensuremath{\mathrm d}\xspace}x)=0 $ ] .
let us now distinguish two different cases . 1 .
if @xmath214 , we obtain @xmath215 furthermore , @xmath216 according to equation .
hence , these two inequalities imply @xmath217 .
if @xmath218 ( and thus @xmath219 ) , we have @xmath220 .
the component @xmath221 is linear in the neighbourhood of @xmath10 , and therefore can be ignored . for the second
integral , we similarly observe that @xmath222 this last inequality and equation prove that @xmath217 .
proposition [ prop : pointwise_levy ] ensures that almost surely @xmath223 furthermore , since the estimate of the hausdorff dimension obtained in @xcite does not rely on this assumption , @xmath133 satisfies almost surely @xmath224 } } ; \\ -\infty & \text{else . }
\end{cases}\ ] ] let us now study the 2-microlocal frontier of lvy processes . according to theorem @xmath225 in @xcite , for all @xmath191}}$ ] and any @xmath226
, the sample path @xmath183 almost surely belongs to the 2-microlocal space @xmath227 .
furthermore , owing to the density of the set of jumps @xmath182 in @xmath136}}$ ] , we necessarily have @xmath228 for any @xmath96 and @xmath93 . hence ,
since the 2-microlocal frontier is a concave function with left- and right - derivatives between @xmath39 and @xmath40 , we obtain that almost surely , for all @xmath191}}$ ] @xmath229 therefore , we focus in the sequel on the negative component of the 2-microlocal frontier of @xmath133 . as outlined in the introduction and according to definition [ def:2ml_spaces ] of 2-microlocal spaces , we need to study the increments around @xmath10 of the integral of the process @xmath133 , i.e. @xmath230 where @xmath231 is fixed .
the form of the polynomial @xmath7 depends on @xmath59 .
if @xmath232 , we only need to remove a linear component equal to @xmath233 , and therefore the increment simply corresponds to @xmath234 . on the other hand ,
if @xmath219 , the proof of proposition [ prop : pointwise_levy ] shows that we also need to subtract the compensation term @xmath235 . then , in this case , we have to study @xmath236 for the sake of readability , we divide the proof of theorem [ th:2ml_levy ] and its corollary into several technical lemmas .
we begin by obtaining a global upper bound of the frontier .
[ lemma:2ml_levy1 ] almost surely for all @xmath191}}$ ] , the 2-microlocal frontier @xmath237 satisfies @xmath238}};\quad \sigma_{y , t}(s ' ) \leq { \bigl ( \frac{1}{\beta } + s ' \bigr)}\wedge 0.\ ] ] let @xmath204 , @xmath184 , @xmath239 and @xmath240 . as stated in @xcite , to prove that @xmath241 , it is sufficient to exhibit for any @xmath184 a sequence @xmath242 such that @xmath243 and @xmath244 . in order to extend this inequality to the 2-microlocal frontier and
obtain the desired equation , we need to reinforce the previous statement and show that in the neighbourhood of @xmath245 , there is no other jump of similar size . more precisely , let us consider @xmath246 consecutive intervals @xmath247 of size @xmath248 .
the family @xmath249 forms a cover of @xmath136}}$ ] .
each @xmath247 can be divided into at least @xmath250 disjoint intervals @xmath251 of size @xmath252 .
finally , let @xmath253 be the three consecutive intervals of same size inside @xmath251 .
we investigate the probability of the following event @xmath254 since @xmath255 is a poisson measure , @xmath256 corresponds to the intersection of independent events and we have @xmath257 as described in @xcite , @xmath59 can also be defined by @xmath258 . therefore , there exists @xmath259 such that for all @xmath260}}$ ] , @xmath261 .
then , for all @xmath204 large enough , we obtain @xmath262 according to the definition of @xmath59 , there also exists an increasing sequence @xmath263 such that for all @xmath264 , @xmath265 . therefore , along this sub - sequence , we have @xmath266 for any @xmath264 .
let us now consider the event @xmath267 defined by @xmath268 since the events @xmath269 are independent and have the same probability , we obtain @xmath270 which leads to @xmath271 , since @xmath272 .
therefore , for all @xmath264 , @xmath273 . finally , let @xmath274 .
for all @xmath264 , it satisfies @xmath275 where @xmath196 is a positive constant .
hence , using the borel - cantelli lemma , there exists an event @xmath276 of probability @xmath40 such that for any @xmath277 , there exists @xmath278 and for all @xmath279 , @xmath280 now let @xmath191}}$ ] , @xmath277 and @xmath279 .
there exist @xmath281 such that @xmath282 and @xmath283 .
hence , according to the definition of the event @xmath256 , there is @xmath284 such that @xmath285 and @xmath286 .
furthermore , there is no other jump of size greater than @xmath287 in the ball @xmath288 , where @xmath289 . since @xmath290
, we necessarily have @xmath291 or @xmath292 . without any loss of generality ,
we assume in the sequel that @xmath291 is satisfied .
as previously outlined , we need to study the increments described in the equation above .
specifically , we observe on the interval @xmath293}}$ ] that @xmath294 let us obtain an upper bound for the second term .
there is no jump of size greater than @xmath287 in the interval @xmath295}}$ ] .
therefore , for all @xmath296}}$ ] , we have @xmath297 .
lemma [ lemma : tech_lemma2 ] provides an upper bound for the first term .
indeed , let @xmath298 , where @xmath299 .
using the borel - cantelli lemma , we know that for any @xmath264 sufficiently large , for all @xmath300}}$ ] such that @xmath301 , @xmath302 .
moreover , we observe that @xmath303 . therefore , using the previous inequalities , we obtain that almost surely for all @xmath279 @xmath304}};\quad { \bigl\lvert y^{m_n(1+{\varepsilon})}_s - y^{m_n(1+{\varepsilon})}_{t_n } \bigr\rvert } \leq c m_n 2^{-m_n(1+{\varepsilon})}.\ ] ] to investigate the second integral term , we have to distinguish the two different cases introduced above . 1 .
if @xmath232 , the integral of the linear drift corresponds to @xmath305 the norm of the previous expression is bounded above by @xmath306 as @xmath307 .
2 . if @xmath219 , we know that the drift @xmath308 can be removed from the lvy process
. therefore , we only have to consider the following quantity @xmath309 since it can be assumed that @xmath310 , we similarly get @xmath311 . the previous inequalities then yield @xmath312 where @xmath196 and @xmath313 are positive constants independent of @xmath314 . finally , since @xmath315 , @xmath316 where @xmath317 .
hence , according to definition [ def:2ml_spaces ] of the 2-microlocal spaces , this last equation ensures that @xmath318}};\quad \sigma_{y , t}(s ' ) \leq { \bigl ( \lambda-1 + s ' \bigr)}\wedge 0.\ ] ] since the inequality holds for any @xmath319 and @xmath320 , we obtain the expected result .
the following simple lemma will be used in the sequel to obtain the 2-microlocal frontier when @xmath321 .
[ lemma : tech_lemma3 ] let @xmath322 , @xmath184 and @xmath323 such that @xmath324 for all @xmath204 , let @xmath325 be the collection of successive subintervals of @xmath136}}$ ] of size @xmath326 .
then , there exists almost surely @xmath327 such that for all @xmath328 and for any interval @xmath329 , there are at most @xmath330 jumps of size greater than @xmath331 in @xmath329 . for any @xmath204 and @xmath332 , we know that @xmath333 is a poisson variable .
hence , we have @xmath334 where @xmath335 . according to the definition of @xmath59 , we know that for all @xmath206 sufficiently large , @xmath336
. therefore , @xmath337 where @xmath177 is a positive constant , owing to the assumption made on @xmath0 , @xmath338 and @xmath330 .
the borel - cantelli lemma concludes the proof . in the next lemma
, we study the behaviour of the 2-microlocal frontier of @xmath133 at points @xmath191}}$ ] where @xmath339}}$ ] .
[ lemma:2ml_levy2 ] almost surely , for all @xmath340 , we have @xmath341 and @xmath342 , i.e. for all @xmath343 @xmath344 let @xmath319 and @xmath345 .
the inequality above is satisfied for @xmath346 .
therefore , lemma [ lemma : tech_lemma3 ] ensures that there exists almost surely @xmath193 such that for any @xmath207 , the distance between two consecutive jumps of size larger than @xmath331 is at least @xmath347 .
let @xmath348}}$ ] and @xmath349 } } \setminus s(\omega)$ ] ( @xmath10 is not a jump time ) .
according to the characterisation of the set @xmath350 , there exist sequences @xmath263 and @xmath242 such that @xmath351 as previously , we shall assume that @xmath352 and investigate the increment @xmath353 , where @xmath354 . as stated in lemma
[ lemma:2ml_levy1 ] , for all @xmath279 , @xmath355}};\quad { \bigl\lvert y^{m_n(1+{\varepsilon})}_s - y^{m_n(1+{\varepsilon})}_{t_n } \bigr\rvert } \leq c m_n 2^{-m_n(1+{\varepsilon})}.\ ] ] furthermore , the remaining integral also satisfies the following inequality . 1 .
if @xmath232 , the norm of @xmath356 is upper bounded by @xmath357 since @xmath358 .
if @xmath219 , we similarly obtain @xmath359 hence , the previous inequalities yield @xmath360 if @xmath361 , there exists @xmath362 such that @xmath363 .
we observe that @xmath364 and the previous expression is lower bounded by @xmath365 .
hence , the 2-microlocal frontier of @xmath133 at @xmath10 enjoys @xmath366 for all @xmath93 such that @xmath367}}$ ] .
since this inequality is obtained for any @xmath368 such that @xmath363 and @xmath369 , we get the expected formula and @xmath341 . in the second case @xmath370 , we know that we can assume @xmath371 .
thus , we have @xmath372 .
similarly to the previous case and as @xmath373 , we obtain the expected upper bound of the 2-microlocal frontier . to conclude this proof , let us consider the case @xmath374 .
we observe that for all @xmath375 , @xmath376 since @xmath133 is right - continuous .
similarly , for all @xmath377 , @xmath378 . therefore , since @xmath379 , there does not exist a polynomial @xmath7 that can cancel both terms @xmath380 and @xmath381 , which proves that @xmath382 for all @xmath93 .
for this last technical lemma , we focus on the particular case @xmath383 .
[ lemma:2ml_levy3 ] almost surely , for all @xmath384 , @xmath133 satisfies @xmath385 furthermore , for any @xmath386 , we have @xmath387 for all @xmath93 . according to lemma 1 in @xcite , for any @xmath204 and @xmath184 , @xmath388 } } , d ( 2^{-m(1+{\varepsilon})},1 ) \bigr ) } \geq 2 \pi{\bigl ( d(2^{-m(1+{\varepsilon})},1 ) \bigr ) } + 2 m \bigr ) } } \leq e^{-m}.\ ] ] hence , using the borel - cantelli lemma , we know that almost surely , for any @xmath319 and @xmath206 sufficiently large , there are at most @xmath389 jumps of size greater than @xmath331 on the interval @xmath136}}$ ] . for any @xmath204 , the process @xmath390}}\times d(2^{-m(1+{\varepsilon})},1 ) } x \,j({\ensuremath{\mathrm d}\xspace}s,{\ensuremath{\mathrm d}\xspace}x)$ ] is a compound poisson process , and therefore differences between successive jumps are i.i.d .
exponential random variables @xmath391 . according to the previous computation , we only have to consider the first @xmath392 random variables @xmath393 .
let @xmath394 , @xmath395 and for all @xmath204 , @xmath396 be the number of variables @xmath397 which are smaller than @xmath248 .
@xmath398 follows the binomial distribution of parameters @xmath399 and @xmath400 . according to the markov inequality , @xmath401 therefore , using the borel - cantelli lemma , almost surely for any @xmath395 , @xmath402 for all @xmath403 . for any @xmath404 and @xmath204 , let @xmath405 designate the following set @xmath406 } } \cup { \ensuremath{\bigl[t^r_{j , m}-2^{-m\alpha},t^r_{j , m}+2^{-m\alpha}\bigr ] } } \bigr\}},\ ] ] where @xmath407 and @xmath408 are respectively the left and right points which define the r.v .
moreover , let @xmath405 denote @xmath410 . to obtain an upper bound for the hausdorff dimension , we observe that for any @xmath161 sufficiently large and @xmath167 , @xmath411 the series converges if @xmath412 .
since this property is satisfied for any @xmath184 and the definition of @xmath413 does not depend on @xmath338 , it ensures that almost surely for any @xmath395 , @xmath414 . finally ,
since the family @xmath415 is decreasing , the inequality holds for any @xmath404 .
let us now consider @xmath416 , and let @xmath417 be @xmath418 .
one readily verifies that @xmath419 .
let @xmath420 .
for any @xmath184 , there exists @xmath161 such that for all @xmath328 , @xmath421 , where @xmath422 .
furthermore , as @xmath343 , there exist sequences @xmath263 and @xmath242 such that @xmath423 , @xmath424 and @xmath425 . without any loss of generality , we can assume that @xmath426 for all @xmath264 .
then , since @xmath427 , we know that for all @xmath264 , there is no jump of size larger than @xmath287 in @xmath428
. therefore , a reasoning similar to lemma [ lemma:2ml_levy2 ] yields @xmath429 where @xmath430 .
since this inequality is satisfied for any @xmath184 and @xmath431 , the 2-microlocal frontier enjoys @xmath432 for all @xmath93 such that @xmath367}}$ ] .
hence , we have proved that @xmath433 and @xmath434 , and since @xmath419 and @xmath435 , we obtain the expected estimates . to conclude this lemma
, we obtain an upper bound of the 2-microlocal frontier in the case @xmath76 .
let @xmath436 , @xmath437 and @xmath307 . the equation obtained in the previous lemma still holds : @xmath438 this inequality ensures that for all @xmath394 and @xmath437 , @xmath439 .
the limit @xmath373 leads to the expected upper bound . before finally proving theorem [ th:2ml_levy ] and its corollary on the 2-microlocal frontier of lvy processes
, we recall the following result on the increments of a brownian motion .
the proof can be found in @xcite ( inequality @xmath440 ) .
[ lemma : bm_incr ] let @xmath131 be a @xmath134-dimensional brownian motion .
there exists an event @xmath276 of probability one such that for all @xmath277 , @xmath184 , there exists @xmath441 such that for all @xmath442 and @xmath191}}$ ] , we have @xmath443 we use the notations introduced at the beginning of the section .
as previously said , the compound poisson process @xmath132 can be ignored since it does not influence the final regularity .
if @xmath444 , and therefore @xmath445 and @xmath446 , lemmas [ lemma:2ml_levy1 ] , [ lemma:2ml_levy2 ] and [ lemma:2ml_levy3 ] on the component @xmath133 yield theorem [ th:2ml_levy ] . otherwise , the lvy process @xmath66 corresponds to the sum of the brownian motion @xmath131 and the jump component @xmath133 . still using lemmas
[ lemma:2ml_levy1 ] , [ lemma:2ml_levy2 ] and [ lemma:2ml_levy3 ] , it is sufficient to prove that almost surely for all @xmath191}}$ ] , @xmath447 .
owing to the definition of 2-microlocal frontier , we already know that @xmath448 .
furthermore , when @xmath449 , it is straightforward to get @xmath450 . therefore , to obtain theorem [ th:2ml_levy ] , we have to prove that almost surely for all @xmath191}}$ ] , @xmath451 . 1 .
if @xmath452 , we only need to slightly modify the proof of lemma [ lemma:2ml_levy1 ] . more precisely , let us consider the same sequence @xmath242 constructed there . we observe
that since @xmath131 is almost surely continuous , @xmath453 and thus , we can still assume that @xmath454 .
then , we have @xmath455 since @xmath131 is a brownian motion , we know there exists @xmath456 such that for all @xmath300}}$ ] , @xmath457 .
hence , the last term in the previous equation satisfies @xmath458 where we recall that @xmath459 .
this term is negligible in front of @xmath460 , and the rest of the proof of lemma [ lemma:2ml_levy1 ] ensures that @xmath461 , for all @xmath93 such that @xmath462}}$ ] , which is sufficient to obtain theorem [ th:2ml_levy ] .
2 . if @xmath166 , let @xmath463 and @xmath464 . according to lemma [ lemma : tech_lemma3 ] , there exist @xmath323 and @xmath327 such that for all @xmath328 , there are at most @xmath330 jumps of size greater than @xmath465 in any interval @xmath329 .
hence , for any @xmath332 , there exists a subinterval @xmath466 of size @xmath467 and with no such jump inside .
+ lemma [ lemma : tech_lemma2 ] proves that @xmath193 can be chosen such that for all @xmath207 , @xmath468 } } \quad\text{s.t.}\quad { \lvertu - v\rvert}\leq 2^{-\alpha m};\quad { \bigl\lvert y^{m(1 + 3{\varepsilon})}_u - y^{m(1 + 3{\varepsilon})}_v \bigr\rvert } \leq c m 2^{-m(1 + 3{\varepsilon})}.\ ] ] let @xmath191}}$ ] and @xmath332 such that @xmath469 . according to lemma [ lemma : bm_incr ]
, there exist @xmath470 such that @xmath471 .
then , we know that @xmath472 where @xmath338 and @xmath196 can be chosen such that @xmath473 .
thus , @xmath474 and therefore , for any @xmath206 sufficiently large , there exists @xmath475 such that @xmath476 .
furthermore , @xmath477 can be chosen such that there are no jumps of size greater than @xmath465 in @xmath478 , where @xmath479 . then , using a reasoning similar to the previous point , we obtain for any @xmath207 , @xmath480 this inequality implies that for all @xmath93 such that @xmath462}}$ ] , @xmath481 , which concludes the proof . owing to theorem [ th:2ml_levy ]
, the case @xmath44 corresponds to the classic spectrum of singularities .
hence , let @xmath482 .
we recall that @xmath91 denotes the parameter @xmath483 . if @xmath484 or @xmath485 , the result is straightforward using theorem [ th:2ml_levy ] and properties of the 2-microlocal frontier .
hence , we suppose @xmath482 and @xmath486 .
we note that @xmath487 , since the negative component of the 2-microlocal frontier of @xmath66 cannot be constant .
thus , @xmath95 satisfies @xmath488 using notations introduced in the proof of lemma [ lemma:2ml_levy3 ] , we know that for all @xmath489 , @xmath490 , where @xmath491
. therefore , this inequality and the equation above ensure that @xmath492 .
as previously outlined , the collection of sets @xmath71 considered in theorem [ th:2ml_levy ] gathers times at which the 2-microlocal regularity is unusual ( the slope of the frontier is not equal to @xmath40 ) . in this section ,
we present examples of lvy processes which show that for a fixed blumenthal - getoor exponent , different situations may occur , depending on the form of the lvy measure .
it is assumed in the sequel that @xmath493 .
[ ex:2ml_levy1 ] let @xmath55 be a lvy measure such that @xmath494 .
then , the lvy process @xmath133 with generating triplet @xmath495 almost surely satisfies @xmath496 according to theorem [ th:2ml_levy ] , we have to prove that for all @xmath23 , @xmath82 . for that purpose , we extend lemma [ lemma:2ml_levy2 ] to any @xmath497 .
let @xmath242 still be a sequence such that @xmath498 and @xmath499 for all @xmath264 .
we first assume that @xmath500 . similarly to lemma [ lemma:2ml_levy2 ] , we obtain @xmath501 and @xmath502 for all @xmath503}}$ ] . furthermore ,
owing to the definition of @xmath55 , there are only positive jumps in the interval @xmath504}}$ ] .
hence , if we consider the contribution to the increment @xmath505 of jumps of size greater than @xmath287 , it is positive and larger than @xmath506 for all @xmath507 .
therefore , @xmath508 the rest of the proof of lemma [ lemma:2ml_levy2 ] holds and ensures the equality .
the case @xmath509 is treated similarly . the 2-microlocal frontier obtained in the previous example
is proved to be classic at any @xmath49 .
this behaviour is due to the existence of only positive jumps , which cannot locally compensate each other .
similarly , the proof of lemma [ lemma:2ml_levy2 ] focuses on times where the distance between two consecutive jumps of close size is always sufficiently large , so that there is no compensation phenomenon . hence , in order to exhibit some unusual 2-microlocal regularity , we consider in the next example points where jumps are locally compensated at every scale .
[ ex:2ml_levy2 ] for any @xmath510 and all @xmath511 , there exists a lvy measure @xmath512 such that its blumenthal - getoor exponent is @xmath59 and the lvy process @xmath133 with generating triplet @xmath513 almost surely satisfies @xmath84 .
let @xmath510 and @xmath514 such that @xmath515 .
let the measure @xmath512 , where @xmath516 , be @xmath517 @xmath0 and @xmath177 are supposed to satisfy @xmath518 .
we note that @xmath519 since @xmath520 and @xmath521 .
one readily verifies that the blumenthal - getoor exponent of @xmath512 is equal to @xmath59 .
moreover , since the measure @xmath512 is symmetric , @xmath133 is a pure jump process with no linear component .
the construction of the example is divided into two parts .
we first define a random time set @xmath522 and prove that it is nonempty with positive probability . then , we determine the 2-microlocal frontier at points of @xmath522 in order to exhibit unusual behaviours . more precisely , let us define an inhomogeneous galton - watson process @xmath523 that is used to construct the random set @xmath522 .
every individual of generation @xmath524 represents an interval @xmath525 of size @xmath526 , and the distribution of its offspring is denoted @xmath527 .
there exist at least @xmath528 intervals @xmath529 of size @xmath530 inside @xmath531 .
then , for every @xmath532 , consider an interval @xmath533 of size @xmath534 centered inside @xmath535 .
let @xmath536 denote the probability of the existence of two jumps of size @xmath537 but with different signs inside an interval @xmath533 , and of the absence of jumps of size @xmath537 in the rest of @xmath535 .
it is equal to @xmath538 the distribution of the offspring @xmath527 is defined as the number of intervals @xmath535 which satisfy such a configuration .
it follows a binomial distribution @xmath539 with the following mean , @xmath540 } } = m_n p_n \sim_n 2^{-j_{n-1}\delta + j_n(2\beta+\alpha - 2\gamma ) - 1 } \sim_n 2,\ ] ] owing to the definition of @xmath541 .
then , to every @xmath525 is associated a family of subintervals @xmath542 such that for all @xmath543 , @xmath544 is a subinterval of @xmath535 of size @xmath545 and the distance between @xmath533 and @xmath544 is equal to @xmath545 .
let us now define the random set @xmath522 as follows : @xmath546 we observe that @xmath522 is nonempty if and only if the galton - watson tree @xmath523 survives .
according to proposition 1.2 in @xcite , if @xmath523 satisfies the conditions @xmath547 } } < + \infty,\qquad\inf_{n\in{\ensuremath{\mathbf{n}}\xspace } } { \mathbb{e}{[l_n ] } } > 0\quad\text{and}\quad\liminf_{n\in{\ensuremath{\mathbf{n}}\xspace } } { \bigl({\mathbb{e}{[t_n]}}\bigr)}^{1/n } > 1\ ] ] then the survival probability @xmath548 is strictly positive .
one readily verifies that @xmath549 } } \leq 3 m_n p_n$ ] and @xmath550 } } = \prod_{p=1}^n { \mathbb{e } { [ l_p ] } } $ ] for all @xmath264 , proving that @xmath551 .
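the survival criterion can also be illustrated numerically ; the python sketch below runs a crude monte - carlo estimate for an inhomogeneous galton - watson process with binomial offspring ( the offspring parameters are hypothetical stand - ins for @xmath539 ) :

    import numpy as np

    rng = np.random.default_rng(2)

    def survival_probability(mean_offspring=2.0, generations=30, trials=500):
        # monte-carlo estimate of the survival probability of an
        # inhomogeneous galton-watson tree with Binomial(m_n, p_n) offspring,
        # calibrated so that m_n * p_n stays close to mean_offspring
        survived = 0
        for _ in range(trials):
            population = 1
            for n in range(1, generations + 1):
                if population == 0 or population > 1000:
                    break  # extinct, or large enough to survive almost surely
                m_n = 4 * n                      # hypothetical interval count
                p_n = min(1.0, mean_offspring / m_n)
                population = int(rng.binomial(m_n, p_n, population).sum())
            survived += population > 0
        return survived / trials

with a mean offspring above 1 the estimate stabilizes at a strictly positive value , matching the criterion of proposition 1.2 cited above .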
let us set @xmath552 , @xmath553 and determine the regularity of @xmath183 at @xmath10 .
owing to the construction of @xmath522 , there exists a sequence @xmath242 which converges to @xmath10 and such that @xmath554 this clearly proves that @xmath10 belongs to @xmath555 for all @xmath556 . since we also know that the distance between @xmath10 and a jump of size @xmath537 is at least @xmath545 , @xmath557 for all @xmath558 . hence @xmath343 and the pointwise hlder exponent @xmath559 is equal to @xmath560 .
let us now study the 2-microlocal frontier of @xmath133 at @xmath10 and set @xmath561 with @xmath562 sufficiently small .
there exists @xmath264 such that @xmath563 .
two different cases have to be distinguished . 1 .
if @xmath564 , there exist at most two jumps of size @xmath537 in the interval @xmath565}}$ ] ( corresponding to the jumps inside @xmath533 ) .
let us first estimate the contribution @xmath566 to the increment .
for any @xmath567 , lemma [ lemma : tech_lemma2 ] ensures that almost surely for all @xmath264 @xmath568 } } , { \lverts - v\rvert}\leq 2^{-j_{n+1}\delta ' } } { \bigl\lvert y^{j_{n+1}}_s - y^{j_{n+1}}_v \bigr\rvert } \leq 3 j_{n+1 } 2^{-j_{n+1}}\ ] ] then , dividing @xmath565}}$ ] into subintervals of size @xmath569 , we obtain for any @xmath570}}$ ] @xmath571 if we consider the exponent at the limit @xmath572 and @xmath573 , it is equal to @xmath574 .
one easily verifies that it is strictly greater than @xmath40 for all @xmath510 and @xmath575 . hence , owing to the continuity of the fraction , we can assume there exists @xmath184 such that @xmath576 when @xmath577 and @xmath578 are sufficiently close to @xmath177 and @xmath59 .
it ensures that for @xmath570}}$ ] , @xmath579 .
+ since the distance between the two jumps of size @xmath537 in the interval @xmath565}}$ ] is at most @xmath534 , @xmath580 as @xmath581 .
2 . let us now assume @xmath582 .
we first note that there are no jumps of size greater than @xmath537 inside the interval @xmath565}}$ ] . hence ,
if @xmath583 , the previous equation ensures that @xmath584 since we can assume that @xmath585 .
if @xmath586 , we have @xmath587 the exponent at the limit @xmath572 and @xmath573 is equal to @xmath588 .
similarly to the previous case , one verifies that it is strictly greater than @xmath589 for all @xmath510 and @xmath575 , and therefore @xmath0 , @xmath590 and @xmath578 can be chosen such that @xmath591 .
+ hence , the previous equations yield @xmath592 where the constant @xmath196 is independent of @xmath314 . owing to these equations ,
the 2-microlocal frontier @xmath237 of @xmath133 at @xmath10 is greater than @xmath593 . hence , for any @xmath181 , @xmath594 and thus @xmath595 .
furthermore , using the kolmogorov @xmath596 law , we also know that @xmath597 , which ends the proof .
example [ ex:2ml_levy2 ] justifies the distinction made in theorem [ th:2ml_levy ] between the normal behaviour @xmath98 and the exceptional one @xmath97 .
nevertheless , although the previous examples prove that different regularities can be obtained depending on the form of the lvy measure , it remains an open problem to completely characterize the 2-microlocal frontier for a general lvy process .
the linear fractional stable motion ( lfsm ) is a stochastic process that has been considered by several authors : @xcite .
its general integral form is defined by @xmath598 } \nonumber \\ + & a^- { \bigl [ ( t - u)_-^{h-1/\alpha } - ( -u)_-^{h-1/\alpha } \bigr ] } \bigr\ } } \,m_{\alpha,\beta}({\ensuremath{\mathrm d}\xspace}u),\end{aligned}\ ] ] where @xmath101 , @xmath599 and @xmath100 is an @xmath0-stable random measure on @xmath18 with lebesgue control measure @xmath600 and skewness intensity @xmath601}}$ ] . throughout this paper
, it is assumed that @xmath59 is constant , and equal to zero when @xmath602 . in this context , for any borel set @xmath603 , the characteristic function of @xmath604 is given by @xmath605 } } = \begin{cases } \exp{\bigl\ { -\lambda(a){\lvert\theta\rvert}^\alpha { \bigl ( 1 - i\beta \,\text{sign}(\theta)\tan(\alpha\pi/2 ) \bigr ) } \bigr\ } } \quad & \text{if } \alpha\in{\ensuremath{(0,1)}}\cup{\ensuremath{(1,2 ) } } ; \\
\exp{\bigl\ { -\lambda(a){\lvert\theta\rvert } \bigr\ } } & \text{if } \alpha = 1 .
\end{cases}\ ] ] for the sake of readability , we consider in the rest of the section the particular case @xmath606 ( even though , as stated in @xcite , the law of the process depends on the values @xmath607 chosen ) . [ prop : rep_lfsm ] for all @xmath3 and @xmath101 , the random variable @xmath609 satisfies @xmath610 } } , \end{cases}\ ] ] where @xmath611 and @xmath612 is an @xmath0-stable lvy process defined by @xmath613}})\quad\text{and}\quad \forall t\in{\ensuremath{\mathbf{r}}\xspace}_-\quad l_t = -m_{\alpha,\beta}({\ensuremath{[t,0]}}).\ ] ] let @xmath3 and @xmath101 .
since @xmath614 is an @xmath0-stable lvy process , it has cdlg sample paths . according to @xcite ( chap .
4.3.4 ) , the theory of stochastic integration based on @xmath0-stable lvy processes coincides with the theory of integrals with respect to an @xmath0-stable random measure .
therefore , the r.v .
@xmath609 is almost surely equal to @xmath615 .
let @xmath184 and @xmath231 . using a classic integration by parts formula , we obtain @xmath616 1 .
if @xmath617 , @xmath618 . hence , @xmath619 almost surely converges to @xmath620 when @xmath373 .
similarly , @xmath621 converges in @xmath622 .
therefore , using the previous equation with @xmath623 and @xmath624 , we obtain almost surely @xmath625 when @xmath626 , the left - hand term clearly converges to @xmath609 in @xmath622 . according to @xcite , we know that almost surely for any @xmath184 , @xmath627 .
furthermore , we also have @xmath628 and @xmath629 .
therefore , as @xmath630 and using the dominated convergence theorem , the right - hand term almost surely converges to the expected integral .
if @xmath631 , we observe that the equation above can be slightly transformed into @xmath632 according to @xcite , @xmath633 .
therefore , up to a subsequence , the previous expression almost surely converges when @xmath373 , and using a similar formula for @xmath623 , we obtain @xmath634 the property @xmath627 and the previous equivalents yield the stated equation . to end this proof , let us consider the integral representation in the particular case @xmath635 .
in fact, the equation is a slight abuse of notation since the expression does not exist.
nevertheless , we prove that it converges almost surely to @xmath636 when @xmath637 .
let us first assume that @xmath638 and rewrite @xmath609 as @xmath639 the first component of the expression converges to zero since @xmath640 and @xmath633 .
as the second part simply converges to @xmath641 , we get the expected limit .
the case @xmath642 is treated similarly . 1 .
if @xmath103 , we note that the representation obtained in proposition [ prop : rep_lfsm ] is defined almost surely for all @xmath3 .
therefore, let us set @xmath277 and @xmath3 . as previously, we can assume that @xmath191}}$ ] .
then , we observe that @xmath643 where @xmath624 is fixed .
the second term is simply a constant that does not influence the regularity .
similarly , using the dominated convergence theorem , we note that the third one is a @xmath644 function on the interval @xmath136}}$ ] , and therefore has no impact on the 2-microlocal frontier .
+ finally , the first term corresponds to a fractional integral of order @xmath645 of the process @xmath612 .
according to the properties satisfied by the 2-microlocal spaces (see e.g. theorem 1.1 in @xcite), we know that almost surely for all @xmath3 , the 2-microlocal frontier @xmath646 of @xmath647 simply corresponds to a translation of @xmath612 's frontier @xmath648 .
2 . if @xmath102 , we observe that @xmath649 owing to proposition [ prop : pointwise_levy ] and since @xmath650 .
hence , for every @xmath277 , formula is well - defined almost everywhere on @xmath18 .
everywhere else, we may simply set @xmath609 to zero.
similarly to the previous case @xmath651 , the regularity of @xmath66 only depends on the properties of the component @xmath652 in which one might recognize a marchaud fractional derivative (see e.g. @xcite).
let us modify this expression to exhibit a more classical form of fractional derivative.
for almost all @xmath653}}$ ] and @xmath184 , we have @xmath654 the last two terms are equal to @xmath655 which converges to @xmath656 as @xmath373 . similarly , using the dominated convergence theorem
, the first term converges to @xmath657 , and therefore @xmath658 according to classical real analysis results, the previous expression is differentiable almost everywhere on the interval @xmath659 $ ] , and therefore @xmath660 for almost all @xmath191}}$ ] .
the last two formulas ensure that @xmath661 a.s . , and
thus , @xmath647 is a tempered distribution whose 2-microlocal regularity can be determined as well .
+ similarly to the previous case , the term @xmath662 is a riemann - liouville fractional integral of order @xmath663 .
hence , at any @xmath3 , the 2-microlocal frontier of the distribution @xmath664 is equal to @xmath665 .
+ since the regularity of the second component locally corresponds to spectrum of @xmath612 , it does not have any influence .
therefore, almost surely for all @xmath3 , the 2-microlocal frontier @xmath646 of @xmath647 corresponds to a translation of @xmath612 's frontier, i.e. @xmath666 . in both cases, the regularity of @xmath66 can be deduced from @xmath612 's 2-microlocal frontier.
hence, using corollary [ cor:2ml_levy ] and since @xmath612 is an @xmath0-stable lévy process, we obtain the spectrum described in the equation.
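as an illustration of the objects above, the following sketch simulates an lfsm trajectory by drawing symmetric @xmath0-stable increments with the chambers-mallows-stuck method and discretizing the moving-average kernel by a plain riemann sum. this is only a hedged sketch: it assumes the symmetric case (skewness zero) with kernel weights @xmath607 set to (1, 0), truncates the integral over the negative half-line, and is known to be biased at fine scales (fft-based generators in the spirit of stoev and taqqu are preferred in practice); all parameter values are illustrative.

```python
import numpy as np

def sas_increments(alpha, n, dt, rng):
    # chambers-mallows-stuck sampler for symmetric alpha-stable
    # increments with scale dt**(1/alpha)
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    if alpha == 1.0:
        x = np.tan(u)  # standard cauchy
    else:
        x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    return dt ** (1.0 / alpha) * x

def kernel(s, h):
    # (s)_+ ** h with the convention 0 ** h = 0
    out = np.zeros_like(s)
    out[s > 0] = s[s > 0] ** h
    return out

def lfsm(alpha=1.5, H=0.7, T=1.0, n=1024, pad=4096, seed=0):
    # riemann-sum approximation of the moving-average integral;
    # `pad` grid points truncate the integral over the negative half-line
    rng = np.random.default_rng(seed)
    dt = T / n
    grid = np.arange(-pad, n) * dt
    dZ = sas_increments(alpha, grid.size, dt, rng)
    h = H - 1.0 / alpha
    t = np.arange(1, n + 1) * dt
    X = np.array([np.sum((kernel(ti - grid, h) - kernel(-grid, h)) * dZ)
                  for ti in t])
    return t, X

t, X = lfsm()  # here H > 1/alpha, so sample paths are continuous
```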
another class of processes similar to the lfsm has been introduced and studied in @xcite .
named fractional lévy processes, they are defined by @xmath667 where @xmath668 and @xmath612 is a lévy process with @xmath64 (no brownian component), @xmath669 } } = 0 $ ] and @xmath670 } } < + \infty$ ] . owing to this last assumption on @xmath612 ,
lfsms are not fractional lévy processes.
nevertheless , their multifractal regularity can be determined as well .
[ prop:2ml_flp ] let @xmath66 be a fractional lévy process parametrized by @xmath668 .
it satisfies almost surely for all @xmath671}}$ ] , @xmath672 } } ; \\ -\infty & \text { otherwise . } \end{cases}\ ] ] where @xmath59 designates the blumenthal-getoor exponent of the lévy process.
furthermore , for all @xmath93 , @xmath95 is empty if @xmath673 .
@xcite established (theorem 3.3) a representation of fractional lévy processes equivalent to proposition [ prop : rep_lfsm ] .
based on this result, an adaptation of the proof of theorem [ th:2ml_lfsm ] yields the equation. similarly to the lfsm, this statement refines the regularity results established in @xcite and proves that the multifractal spectrum of a fractional lévy process is equal to @xmath674 } } ; \\
-\infty & \text { otherwise}. \end{cases}\ ] ] let @xmath675 be a linear multifractional stable motion with @xmath122 and hurst function @xmath676 . according to the representation obtained in proposition [ prop : rep_lfsm ] , @xmath677 is almost surely equal to @xmath678 .
let @xmath3 and @xmath679 . for all @xmath30 , we investigate the increment @xmath680 using the dominated convergence theorem
, we know that the field @xmath681 is differentiable in the variable @xmath116 . therefore, the mean value theorem ensures that the second term is upper bounded by @xmath682 .
furthermore, the first component satisfies @xmath683 the increment @xmath684 corresponds to the increment of an lfsm with hurst index @xmath685 , whereas the second term can be upper bounded by @xmath686 , where @xmath560 satisfies @xmath687 .
since @xmath116 satisfies the @xmath688 assumption, the proof of theorem [ th:2ml_lfsm ] and the previous inequalities ensure that @xmath689 furthermore, if @xmath690 , we also obtain @xmath691 , proving in particular that @xmath692 . finally, as the set of jumps @xmath182 of @xmath612 is dense in @xmath136}}$ ] , for all @xmath693 , there exists @xmath694 such that @xmath695 .
hence , the 2-microlocal frontier almost surely satisfies for all @xmath191}}$ ] @xmath696 therefore , theorem [ th:2ml_levy ] and the continuity of @xmath129 ensure equality for all @xmath697 .
firstly , if the hurst function is continuous and satisfies @xmath698}}$ ] , then the corresponding lmsm @xmath699 is almost surely continuous . indeed , according to proposition [ prop : rep_lfsm ] ,
the random field @xmath609 is continuous on the domain @xmath700}}$ ] .
hence , the composition with the hurst function @xmath129 also enjoys this property .
moreover, if the continuous hurst function satisfies the weaker condition @xmath701}}$ ] , then the lmsm @xmath699 is continuous if and only if @xmath702 . otherwise, the process almost surely has càdlàg sample paths.
indeed, still according to proposition [ prop : rep_lfsm ] , the random field @xmath609 is càdlàg on the domain @xmath703}}$ ] and its jumps coincide with the jumps of @xmath612 .
hence , @xmath678 is continuous if and only if @xmath704 , i.e. iff @xmath705 .
since @xmath255 is a poisson measure of intensity @xmath706 , it occurs if and only if we have @xmath702 . in the case
@xmath129 does not satisfy the assumption @xmath707 , the proof of theorem [ th:2ml_lmsm ] can be modified to extend the statement and generalize results obtained in @xcite .
this complete study is carried out in @xcite for the multifractional brownian motion.
for the sake of clarity, we prefer to focus in this work on @xmath123-hurst functions and the multifractal nature of the lmsm obtained in this case. although it is assumed throughout this section that @xmath129 is deterministic, owing to the representation exhibited in proposition [ prop : rep_lfsm ] , theorems [ th:2ml_lfsm ] and [ th:2ml_lmsm ] still hold if @xmath129 is a continuous random process. hence, based on these results, a class of random processes with a random and non-homogeneous multifractal spectrum can easily be exhibited. a similar extension of the multifractional brownian motion has been introduced and studied in @xcite.
p. balança and e. herbin .
2-microlocal analysis of martingales and stochastic integrals .
_ stochastic process . appl . _ , 122(6):2346 - 2382 , 2012 .
p. balança and e. herbin .
sample paths properties of irregular multifractional brownian motion .
_ in preparation _ , 2013 .
j .- m . bony .
second microlocalization and propagation of singularities for semilinear hyperbolic equations . in _
hyperbolic equations and related topics ( katata / kyoto , 1984 ) _ , pages 11 - 49 . academic press , boston , ma , 1986 .
d. revuz and m. yor .
_ continuous martingales and brownian motion _ , volume 293 of _ grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences]_. springer - verlag , berlin , third edition , 1999 .
k. sato .
_ lévy processes and infinitely divisible distributions _ , volume 68 of _ cambridge studies in advanced mathematics _ . cambridge university press , cambridge , 1999 .
translated from the 1990 japanese original , revised by the author . | we investigate the regularity of lévy processes within the 2-microlocal analysis framework .
a local description of sample paths is obtained , refining the classic spectrum of singularities determined by jaffard . as a consequence of this result and the properties of the 2-microlocal frontier , we are able to completely characterize the multifractal nature of the linear fractional stable motion ( an extension of fractional brownian motion to @xmath0-stable measures ) , in the case of continuous sample paths as well as in the case of unbounded ones .
the regularity of its multifractional extension is also determined , thereby providing an example of a stochastic process with a non - homogeneous and random multifractal spectrum . |
For quite some time, scientists had a working theory of why certain piebald (patchy black-and-white) mammals look the way they do. They assumed the coloring is a directed pattern that involves pigmented cells instigating a controlled expansion. Turns out, it’s all just random.
Scientists at the University of Bath and the University of Edinburgh have been taking a look at developing mice. Specifically, they have been looking at embryos of piebald mice to see the patterns that determine a mouse’s final pigmentation. In a paper published in Nature Communications, the researchers just admitted that there don’t seem to be any patterns.
This comes as a surprise to many people. Scientists always assumed that piebald animals—especially mice, cats, and horses—got their color patterns in utero. In developed skin, pigment is put out by melanocytes, a specialized type of skin cell. Embryonic animals have proto-melanocytes, called melanoblasts. These spread slowly through the also-developing skin. Scientists didn’t assume that each cat or mouse developed a perfect pattern, but they did assume that there was something guiding the way the melanoblasts moved.
For example, cats that have black backs and white bellies tend to have a defective version of a gene named “kit.” From a scientist’s point of view, these were black cats whose melanoblasts started developing along their back, then moved down towards their belly. Unfortunately, this happened in late development, the melanoblasts moved more slowly than usual, and they didn’t quite close in over the belly before the cat was fully developed.
When they studied developing mice, however, the researchers found that melanoblasts don’t behave anything like that. First of all, they mostly proliferate in early development. Secondly, when they do spread out, they do so at random. Although melanoblasts do sometimes repel each other when they get too close, when scientists studied how they moved (taking images at twenty-minute intervals) they saw that the repulsion didn’t actually speed dispersal.
In the end, the researchers found that skin cells become pigmented through a “process of undirected migration, proliferation and tissue expansion.” There’s no director protein or special chemical signal that lets melanoblasts in one spot know they should expand to the next. The pigment just goes wherever. And while there are larger factors that affect how the cat looks—for example, it turns out the cats with the defective kit gene have melanoblasts that don’t multiply as fast as they do in other cats, which is why they are part-white—there’s no direction to where the melanoblasts go.
[Reconciling diverse mammalian pigmentation patterns with a fundamental mathematical model] ||||| Black and white cats get their distinctive colouring because of the way their cells develop in the womb, research suggests.
Scientists said cats with a two-tone coat do not have enough pigment cells because the cells divide too slowly.
The research also found that cells move randomly rather than following instructions, so in animals without enough pigment cells, colour is distributed in random patches.
The findings could aid understanding of medical conditions such as holes in the heart, which can also be caused by cells not moving to the right place as an embryo develops.
The black and white pattern, known as piebald, can also be seen in some horses.
Researchers at the Universities of Bath and Edinburgh carried out their study on mice and believe they have debunked the previous assumption that the unusual colouring is caused by slow-moving cells.
Professor Ian Jackson of the University of Edinburgh said: “The black and white cat has a mutation and it was assumed that because we knew those cells moved through the skin, it was because the cells didn’t move fast enough, but what we have shown is actually the cells move faster in the black and white cat or spotted mice.
“The problem is there’s not enough of them so they don’t divide enough, they divide more slowly.
“It was always imagined that there would be a signal that would tell them where to go, but they just move at random.
“It’s like diffusion - if you put a drop of milk in a cup of coffee that milk spreads through the whole cup of coffee. Eventually the cells spread through the skin.”
However, when there are not enough of the pigment cells, they do not reach all areas of the skin, resulting in the distinctive patchy black and white patterns.
The mathematical model used by the scientists could now be used for further research tracking different cells during early development.
“The fact that they move randomly means there can be a random element in development processes,” Professor Jackson said.
“If you have got enough cells then that random process doesn’t really matter because there’s enough cells to go around to do the job but if you have a mutation and there aren’t enough cells sometimes they will get there and sometimes they won’t, so there’s a random element.”
Dr Christian Yates, a mathematical biologist from the University of Bath, said the findings were “counterintuitive”.
“Previously it was thought that the defective kit gene slowed cells down but instead we’ve shown that it actually reduces the rate at which they multiply.
“There are too few pigment cells to populate the whole of the skin and so the animal gets a white belly.
“In addition to kit, there are many other genes that can create piebald patterns; the mathematical model can explain piebald patterns regardless of the genes involved.”
The research is published in Nature Communications.
||||| Animal models
All animal work was approved by a University of Edinburgh Internal Ethics Committee and was performed in accordance with the institutional guidelines under licence by the UK Home Office (PPL 60/4424 and PPL 60/3785). Mice were maintained in the animal facilities of the University of Edinburgh. Mouse lines containing the transgenes or modified alleles Dct::lacZ (generated in-house; ref. 6), R26R-YFP (kindly provided by Prof L. Smith, The University of Edinburgh; ref. 59), Tyr::CreA and Tyr::CreB (kindly provided by Prof. L. Larue, Institute Curie, Paris; ref. 60), Nf1flox (obtained from the National Cancer Institute, Mouse Repository, Frederick, USA; ref. 61) and KitW-v (obtained from the Medical Research Council, Harwell, UK; ref. 45) were genotyped according to published methods. PmelCreERT2 mice (unpublished, generated in-house) were genotyped using the PCR primers Pmel_For (5′- GGGTAAAGAAGAGGGGAGAGG -3′), Pmel_Rev (5′- GGGATGTTCCATCACCTTCA -3′) and CreERT2_Rev (5′- AGGCAAATTTTGGTGTACGG -3′) to distinguish between targeted and wild-type alleles. Animals used to investigate adult belly spots were male progeny from a cross between Nf1+/flox; KitW-v/+ males and Tyr::CreATg/Tg; Nf1+/flox; KitW-v/+ females on a mixed genetic background. Only male animals were considered as the Tyr::CreA transgene is X-linked. Tyr::CreA+ve; Nf1flox/flox animals were smaller than their litter mates. For live imaging of embryonic skin on a KitW-v/+ background, E14.5 progeny from a cross between Tyr::CreBTg/Tg; KitW-v/+ and R26YFPRTg/Tg; KitW-v/+ individuals or between Tyr::CreBTg/Tg; Kit+/+ and R26YFPRTg/Tg; KitW-v/+ individuals were used on a mixed genetic background. No melanoblasts were observed in the back skin of E14.5 Tyr::CreB+ve; R26YFPRTg+ve; KitW-v/W-v individuals. For live imaging of embryonic skin on a Nf1flox/+ and Nf1flox/flox background, E14.5 progeny from a cross between Tyr::CreBTg/Tg; Nf1flox/+ and R26YFPRTg/Tg; Nf1flox/+ individuals or Tyr::CreBTg/Tg; Nf1flox/+ and R26YFPRTg/Tg; Nf1flox/flox individuals were used on a mixed genetic background. E14.5 Tyr::CreB+ve; Nf1flox/flox individuals were viable and morphologically indistinguishable from their litter mates. To image melanoblast behaviour in whole embryos at E11.5 we examined the progeny of a cross between PmelCreERT2/CreERT2; R26R-YFPTg/Tg individuals on a mixed background. The pregnant mothers were given 8 mg of 4-hydroxytamoxifen per 40 g body weight by gavage or injection at E10.5. To investigate melanoblast numbers in fixed tissues, Dct::lacZTg/Tg and Dct::lacZTg/+ embryos were used, resulting from crosses between combinations of Dct::lacZTg/+, Dct::lacZTg/Tg and Dct::lacZ+/+ parents on a CD1 background. Embryos used for optical projection tomography were F1 hybrids from a cross between the mouse strains C57Bl6 and CBA (obtained from Charles River Laboratories, UK).
Embryonic skin culture and imaging of whole embryos
Embryonic skin culture was performed as described in Mort et al.31. Briefly, up to six cultures were imaged in parallel per experiment. Skin was sampled from the flank of E13.5, 14.5 and 15.5 mouse embryos. The dorsoventral position varied but was never taken at the ventral extreme. The skin samples were mounted on a clip filled with 1% w/v agarose (in PBS) and secured with suture thread. The clip was then inserted into a custom designed six-well chamber so that the skin was sandwiched against a lummox gas-permeable membrane (Greiner). The wells were filled with DMEM (no phenol red) supplemented with 1 × Glutamax (Gibco), 1% v/v penicillin/streptomycin and 10% v/v fetal calf serum. Whole E11.5 embryos were embedded in 1% w/v agarose (in PBS) in a large custom-made imaging clip so that the dorsal region of the flank was just protruding above the surface of the agarose. The clip was then inserted into a custom designed six-well chamber so that the protruding region of the embryo was pressed against a lummox gas-permeable membrane (Greiner). The wells were filled with DMEM (no phenol red) supplemented with 1 × Glutamax (Gibco), 1% v/v penicillin/streptomycin and 10% v/v fetal calf serum.
X-Gal staining of embryos
X-Gal staining of Dct::lacZ embryos was performed as previously described6. Briefly, embryos were fixed in 4% w/v paraformaldehyde for varying times depending on developmental stage. They were then permeabilized in detergent wash solution (2 mM MgCl 2 , 0.05% w/v BSA, 0.1% w/v sodium deoxycholate and 0.02% v/v Igepal in 0.1 M sodium phosphate buffer; pH 7.3) before being stained overnight in X-Gal stain solution (5 mM K 3 Fe(CN) 6 , 5 mM K 4 Fe(CN) 6 , and 0.085% w/v NaCl with 0.1% w/v X-gal in detergent wash). Embryos were then subjected to further washes in detergent wash solution and then PBS before being post-fixed in 4% w/v paraformaldehyde (in PBS).
Image acquisition
Time-lapse sequences of migrating melanoblasts in embryonic skin culture and whole embryos were captured on a Nikon A1R inverted confocal microscope using a × 20 objective, images were captured at 2-min (for skin) or 5-min (for embryos) intervals over the course of the experiment. A stage top environmental chamber was used providing 5% CO 2 in air and maintaining a constant temperature of 37 °C. Images of X–Gal-stained Dct::lacZ embryos were captured on a Nikon macroscope using a ring light for illumination and a × 2 Nikon objective with × 2 optical zoom.
Optical projection tomography
Optical projection tomography (OPT) was performed on Bouin’s-fixed mouse embryos at 1-day stages between E10.5 and E15.5. Samples were mounted in 1% low-melting-point agarose, dehydrated in methanol and then cleared overnight in BABB (1 part benzyl alcohol : 2 parts benzyl benzoate). Samples were scanned using a Bioptonics OPT Scanner 3001 (Bioptonics, Edinburgh, UK) using a variety of fluorescent wavelengths to visualize tissue autofluorescence (excitation 425/60 nm/emission 480 nm and excitation 480/40 nm/emission 510 nm). Resultant scans were then reconstructed using proprietary software (nRecon/Skyscan, Belgium).
Image analysis and cell tracking
All image analysis tasks were performed using custom written macros for the Fiji distribution of ImageJ62. All morphological and tracking procedures were carried out on segmented images using standard ImageJ routines. To automatically track melanoblasts in the time-lapse sequences a modified version of the wrMTrck plugin (http://www.phage.dk/plugins/wrmtrck.html) was used on segmented TIFF stacks, the script used relied on Gabriel Landini’s morphology collection (http://www.mecourse.com/landinig/software/software.html). The tracking and morphology data generated by the procedure were recorded in a text file and used for the downstream analyses. The MSD of the melanoblast population was calculated from this data using the time ensemble averaging approach63, in a custom macro written for Fiji. Feret’s angles from time-lapse sequences (E14.5) were calculated from the shape of the cell body after image segmentation. The angle of migration in X–Gal-stained samples (E11.5, 12.5, 13.5 and 15.5) was measured manually by drawing a line along the longest axis of each cell in ImageJ. To calculate cell densities for stages E10.5 and E11.5 (Fig. 3a) the total number of cells was divided by the area of the trunk calculated from our OPT data. For all other stages the mid-domain density was measured.
To analyse the cell cycle, only the first 4 h of each time-lapse was considered to minimize laser exposure. The mean M-phase length (T m ) per time-lapse was calculated from the length of five mitotic events. The mean proportion of cells morphologically in M-phase (P mc ) per time-lapse was calculated from five frames spaced 60 min apart over the first 4 h of each time-lapse. The cell cycle time (T c ) for a given time-lapse was then calculated as T c =T m /P mc (refs 37, 38, 39).
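The cell-cycle estimate above is simple arithmetic; the following sketch works it through with invented, purely hypothetical values for the two measured quantities (the study's actual measurements are not reproduced here).

```python
# T_m: mean M-phase duration (minutes), e.g. from five scored mitoses
# P_mc: mean fraction of cells morphologically in M-phase
T_m = 30.0           # hypothetical value
P_mc = 0.031         # hypothetical value
T_c = T_m / P_mc     # cell cycle time, T_c = T_m / P_mc, in minutes
print(T_c / 60.0)    # ~16.1 h with these illustrative numbers
```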
The PCF is a summary statistic that provides a quantitative measure of spatial patterning. The function is derived by normalizing the counts of the distances between pairs of agents (refs 34, 35, 36). It is therefore able to capture patterning and the length scale of individual objects. We applied the PCF with non-periodic pairwise distance counting to multiple microscopy images of melanoblasts in the developing epidermis at E14.5 using a custom Matlab script. To avoid issues associated with the image boundary (where cells had been lost due to image processing) we used only a 256 × 256 μm central portion of each image (we find similar results for alternative window sizes if the central portion is positioned sufficiently far away from the boundary).
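As a rough illustration of the idea (not the authors' Matlab script), a minimal non-periodic PCF estimate might look like the sketch below. The bin width, window size and the single Monte Carlo reference draw used for normalization are simplifying assumptions; a careful estimate would average many reference draws or use an analytic edge correction. Values of g above 1 indicate clustering at that separation, values below 1 indicate regularity or repulsion.

```python
import numpy as np

def pair_distances(pts):
    # unique pairwise euclidean distances between points (n x 2 array)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[np.triu_indices(len(pts), k=1)]

def pcf(points, L=256.0, dr=4.0, seed=0):
    # histogram of observed pair distances, normalized by the same
    # histogram for one uniformly random pattern of equal size
    edges = np.arange(0.0, L, dr)
    obs, _ = np.histogram(pair_distances(points), bins=edges)
    rng = np.random.default_rng(seed)
    ref = rng.uniform(0.0, L, size=(len(points), 2))
    exp, _ = np.histogram(pair_distances(ref), bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        g = obs / exp
    return 0.5 * (edges[:-1] + edges[1:]), g
```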
Measurement of domain expansion
To measure the expansion of the dorsoventral and axial domains of the trunk, OPT models were analysed using ImageJ/Fiji. Two measurements of the trunk circumference were made at the levels of the fore- and hindlimbs and averaged. For E10.5 where the umbilical hernia encompasses most of the axial width of the domain the region comprising the peritoneal membrane was excluded from the measurement as there is no dermal tissue at this level for melanoblasts to colonize. The dorsoventral length was defined as half the mean trunk circumference. Axial width was defined as the length between the hind- and forelimb junctions incorporating the curve of the domain at early stages (E10.5–E12.5).
Statistics
All statistical tests were performed using the ‘R’ statistics package, an open-source software package based on the ‘S’ programming language (http://www.R-project.org). Berman’s test (ref. 64) for a point process model was performed using the additional ‘spatstat’ package (http://www.spatstat.org/). All correlations were explored by examining Pearson’s product-moment correlation coefficient. Comparisons between multiple groups were undertaken using a one-way analysis of variance. Subsequent pairwise comparisons were performed using Tukey’s honest significant difference test, which is corrected for multiple testing.
Model framework
In our modelling framework we consider only the growth of the trunk region of the developing embryo between and not including the limb buds and its colonization by the migrating melanoblast population (Fig. 2a). We assume that melanoblast behaviour in the dermis (between E10.5 and E12.5) and the epidermis (between E12.5 and E15.5) is equivalent. The dermal and epidermal layers that can support melanoblast survival are collectively referred to as the DVI. We use an agent-based discrete random-walk model with volume exclusion on a two-dimensional square lattice of length L x (t) by L y (t) to model the DVI. L x (t) represents the dorsoventral length of the domain at time t and L y (t) the axial length at time t. The lattice spacing is denoted Δ and time evolves continuously. Each agent (melanoblast) is assigned to a lattice site, from which it can move or place progeny into an adjacent site. Attempted agent movement or proliferation events occur with rate P m or P p per unit time, respectively. That is, P m δt is the probability of a given agent attempting to move in the next infinitesimally small time interval δt, with events simulated as such using the Gillespie algorithm. If an agent attempts to move or proliferate into a site that is already occupied, the event is aborted.
Modelling tissue expansion
To model domain growth we employ a stochastic ‘pushing’ growth mechanism as described in Binder et al.65. The insertion of new lattice sites into the domain occurs with rates P ga and P gd per unit time, for growth in the axial and dorsoventral direction, respectively. When a ‘growth event’ occurs in the dorsoventral direction (horizontal direction in Supplementary Fig. 2a), for each row of the lattice one new site is added in a column, which is selected uniformly at random. To accommodate the new sites, in each row, the sites to the right of the added site are shifted a distance Δ rightwards carrying their contents with them (that is, cells move with their sites). Likewise, for axial growth (in the vertical direction in Supplementary Fig. 2b) one new site is added to each column in a row, which is selected uniformly at random and the appropriate sites are shifted upwards. Growth is linear in both the dorsoventral and axial directions as evidenced by experimental data (Supplementary Table 2).
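A minimal sketch of one such 'pushing' growth event, following the description above (this is not the published code, and the row/column orientation of the occupancy array is an assumption):

```python
import numpy as np

def grow_dorsoventral(occ, rng):
    # one dorsoventral growth event: in every row, insert an empty site
    # at a uniformly random column and shift the sites to its right
    # outward, carrying their occupants with them
    rows, cols = occ.shape
    new = np.zeros((rows, cols + 1), dtype=occ.dtype)
    for r in range(rows):
        c = rng.integers(cols + 1)      # insertion point for this row
        new[r, :c] = occ[r, :c]
        new[r, c + 1:] = occ[r, c:]     # pushed sites keep their contents
    return new

# axial growth is the same operation applied to the transposed lattice:
# occ = grow_dorsoventral(occ.T, rng).T
```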
Implementation of model
Movement, proliferation and growth events are modelled as exponentially distributed ‘reaction events’ in a Markov chain. Specifically we use the ‘Gillespie’ Monte Carlo simulation algorithm to simulate realizations of our model system. Each realization represents 5 days of real time from E10.5 to E15.5. We implement zero-flux boundary conditions on all boundaries in our discrete model. This represents the assumption that melanoblast efflux is balanced by melanoblast influx at the boundaries of the domain.
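A minimal sketch of the movement/proliferation loop described above (not the authors' implementation): domain growth is omitted, the lattice size, rates and duration are placeholders loosely echoing the surrounding text, and zero-flux boundaries are enforced by aborting off-lattice attempts.

```python
import numpy as np

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def simulate(Lx=31, Ly=43, n0=21, Pm=1.0, Pp=0.1, T=100.0, seed=0):
    # each agent attempts a move at rate Pm and a division at rate Pp;
    # attempts into occupied or off-lattice sites are aborted
    rng = np.random.default_rng(seed)
    occ = np.zeros((Lx, Ly), dtype=bool)
    start = rng.choice(Lx * Ly, size=n0, replace=False)
    cells = [(int(s) // Ly, int(s) % Ly) for s in start]
    for c in cells:
        occ[c] = True
    t = 0.0
    while t < T:
        n = len(cells)
        t += rng.exponential(1.0 / (n * (Pm + Pp)))  # next attempted event
        i = int(rng.integers(n))                     # agent chosen uniformly
        x, y = cells[i]
        dx, dy = MOVES[int(rng.integers(4))]         # neighbour chosen uniformly
        nx, ny = x + dx, y + dy
        if not (0 <= nx < Lx and 0 <= ny < Ly) or occ[nx, ny]:
            continue                                 # aborted attempt
        if rng.random() < Pm / (Pm + Pp):            # movement event
            occ[x, y] = False
            cells[i] = (nx, ny)
        else:                                        # proliferation event
            cells.append((nx, ny))
        occ[nx, ny] = True
    return occ

occupancy = simulate()   # lattice roughly matching the E10.5 initial domain
```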
Modelling parameters from experimental data
Lattice spacing. The lattice spacing is chosen as Δ=38 μm. This implies that a single agent excludes a volume of 1,444 μm2, which is a realistic estimate for the size of a melanoblast. A completely colonized model domain (that is, every site in the computational domain is occupied by an agent) has a density of ~692 cells per mm2. Our experiments have established that the mean (±95% confidence interval (CI)) density of a ‘colonized’ domain at E15.5 is 701.21±137.70 cells per mm2 (main text, Fig. 2d).
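The quoted numbers follow directly from the lattice spacing:

```python
delta_mm = 0.038                    # lattice spacing: 38 um = 0.038 mm
area_um2 = (delta_mm * 1000) ** 2   # volume excluded by a single agent
full_density = 1 / delta_mm ** 2    # one agent per site, cells per mm^2
print(area_um2, int(full_density))  # 1444.0 um^2 and ~692 cells per mm^2
```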
Domain size and growth rates. Linear isotropic domain growth for the axial and dorsoventral domains was defined from morphological analysis of optical projection tomographs at embryonic stages between E10.5 and E15.5 (Supplementary Table 2). We initialize the domain as a rectangle of length 1,178 μm in the dorsoventral direction and 1,634 μm in the axial direction (corresponding to 31 lattice sites by 43 lattice sites, respectively). Although the domain grows stochastically, we employ constant growth rates P ga =0.00526, min−1 and P gd =0.0246, min−1 in the axial and dorsoventral directions, respectively, such that the mean-field growth in each direction is linear and matches with the experimentally measured linear domain growth relationship.
Initial number and position of cells. We defined the number of progenitor melanoblasts by counting the melanoblasts in the trunk of Dct::lacZ embryos at E10.5; a mean (±95% CI) of 20.32±5.95 melanoblasts (Fig. 2c). In our Dct::lacZ embryos we noted an under-representation of melanoblasts in the centre of the trunk region, although not always clear at E10.5 this was most striking at E11.5 (Supplementary Fig. 1c). We therefore weighted our initial distribution in a similar manner, initializing 21 agents such that on average one-third are between sites 12 and 32 of the axial axis, and the remaining two-thirds are evenly distributed between sites 1 and 11, and 33 and 43 corresponding to a slight under-representation in the middle of the axial axis. These agents are distributed so that 95% are between sites 8 and 17 of the dorsoventral axis. All agents are distributed between sites 8 and 19 of the dorsoventral axis.
Diffusion rate. As described in the main text we determined experimentally a density-dependent relationship between melanoblast diffusion and local density (main text, Fig. 2e). To determine the same relationship in our model we track agents moving on a 646 × 646-μm domain (corresponding to 17 × 17 sites) with periodic boundary conditions. This domain size corresponds, approximately, to the field of view of the microscope used to collect the experimental data on melanoblast movement. At t=0 in the simulation, a number of agents (from 1 to 289, representing all possible non-zero agent densities) are initialized with positions chosen uniformly at random throughout the domain. These agents are allowed to move (but not to proliferate, so as to keep the density constant) as described above for a simulation duration equivalent to 400 min of real time. This process is repeated 100 times for each agent density to guarantee enough data for an accurate representation of the MSD of the population. In each simulation the resulting agent tracks (excluding, for those agents that crossed a boundary, the portion of their tracks after their first boundary crossing event, since the tracks of these agents would be lost in our experimental system) are used to characterize the MSD as described in the main text. To determine the movement rate P m , we compare the relationship between density and effective diffusion coefficient for the experimental data to those for the model for a range of different values of P m . We chose the value of P m that gives the best fit (smallest least squares error, l2 norm). This value P m is given in Supplementary Table 3.
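A time-ensemble-averaged MSD in the spirit of the cited approach can be sketched as follows; the track layout (a list of (T_i, 2) position arrays sampled every dt) is an assumption, and boundary-crossing censoring is left to the caller.

```python
import numpy as np

def msd(tracks, dt):
    # time-ensemble-averaged mean squared displacement over all tracks
    max_lag = max(len(tr) for tr in tracks) - 1
    lags = np.arange(1, max_lag + 1)
    out = np.empty(max_lag)
    for k in lags:
        sq = [np.sum((tr[k:] - tr[:-k]) ** 2, axis=1)
              for tr in tracks if len(tr) > k]
        out[k - 1] = np.mean(np.concatenate(sq))
    return lags * dt, out

# for 2-d diffusion MSD(t) ~ 4*D*t, so an effective diffusion
# coefficient can be read off a linear fit over the early lags
```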
Proliferation rate. We defined the maximum proliferation rate by counting the number of melanoblasts in the trunk of Dct::lacZ E10.5 and E11.5 embryos. We found a mean (±95% CI) of 20.32±5.95 melanoblasts increasing to 151.09±27.95 melanoblasts in the first 24 h (Fig. 2c). To estimate a maximum doubling time for this period we used the mean cell number at E10.5 (–95% CI=14 cells) and the mean cell number at E11.5 (+95% CI=179 cells) implying a mean dermal doubling time of 6.6 h. We therefore chose a maximum possible population doubling time in the model of 7 h.
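The quoted doubling-time bound follows from assuming exponential growth between the two counts; a quick check with the CI bounds stated above:

```python
import numpy as np

# N(t) = N0 * 2**(t / T_d)  =>  T_d = t * ln(2) / ln(N1 / N0)
N0, N1, t_hours = 14, 179, 24.0  # bounds quoted for E10.5 -> E11.5
T_d = t_hours * np.log(2) / np.log(N1 / N0)
print(round(T_d, 1))             # ~6.5 h, matching the quoted 6.6 h up to rounding
```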
Simulation of rare clonal patterns. To investigate rare clones, at a time point chosen uniformly at random during the simulation, we chose one of the agents, from amongst all the agents that populate the domain at that time, with equal probability. This agent is marked and all the agent’s progeny inherit the same mark. At the end of the simulation all marked agents are plotted in a different colour to the non-marked agents resulting in a diffuse rare clonal pattern as seen in Supplementary Fig. 3a and in Fig. 3f.
Identification of stripe-like patterns in the model. To investigate chimeric stripe-like patterns in our discrete model we initialized our simulations with two distinctly labelled agent subpopulations and tracked the positions of their progeny over time (Fig. 4a). When the simulation was complete, we assigned the value +1 (associated with light grey cells) to one of the agent types and −1 (associated with black cells) to the other (while empty lattice sites are assigned the value 0). We then averaged the values associated with the lattice sites on each row. This provides a measure of the proportion of each agent colour in each row, which we call the ‘clonal signal’ (Fig. 4b). We repeat this process for each of the possible divisions of the 21 initial cells into two non-overlapping subsets, which we call a ‘clonal ratio’. For instance to investigate the patterning of a single clone we label one clone with +1s and remaining 20 clones with −1s. In this way we can investigate the pattern formed by a single clone amongst 21 differently labelled clones. Similarly, to investigate the pattern formed by ~11 distinctly labelled clones we label two of the randomly selected clones with 1s and 19 with −1s. To investigate the effect of having only two different clonal labels, we label approximately half (10 or 11) of the randomly selected clones with 1s and the other half with −1s.
To identify the presence of stripe-like patterns in our simulations we apply the DFFT to our intensity profiles. We repeat this process 100 times and generate an average DFFT for each different initial clonal ratio (Fig. 4c). For simulations without stripe-like patterns (that is, when the agents from different subpopulations are well mixed) no dominant frequency is clearly identifiable. However, in the case where different agent subpopulations are not well mixed and have formed dorsoventral stripes, a single, low frequency is identifiable that relates directly to the periodicity of the stripes in the simulation. A dominant frequency (corresponding to the maximum value of the averaged DFFT) can be identified for each initial clonal ratio. We call this dominant frequency the ‘stripe intensity’ (Fig. 3d). This method allows us to systematically identify the presence of stripes. ||||| Scientists have solved what shall henceforth be known as the piebald mystery: by discovering the origins of the broad white patches that can adorn the belly and head fur of cats, dogs and farm animals.
The distinctive patterns were known to be caused by a mutated gene, but how the faulty DNA produced the signature white bellies and other splashes of light on animals’ coats was far from clear.
The discovery has led researchers to a mathematical model that describes how the curious patterns arise through the movement and growth of pigment cells when the animals are still growing in the womb.
The formula could help scientists understand not just the variations of animal colourings but also more serious, related conditions called neurocristopathies that cause deafness, gut disorders, heart defects and cancer in humans when cells fail to move to the right positions.
“We’re interested in the patterning because it’s an analogy for these more serious diseases,” said Christian Yates, a mathematical biologist and senior author on the study at Bath University.
To work out how the patterns formed, the scientists tracked the fates of pigment cells called melanocytes in mouse embryos that carried mutations in a gene called Kit. The mutated gene is known to be the main cause of piebald patterns.
One leading theory held that the patterns were caused by the mutated Kit gene slowing down the migration of pigment cells. The cells are produced at the back of the embryo and move through the skin towards the front as the animal develops in the womb. If the cells moved too slowly, they would never reach the front of the animal, leaving the belly, and some other regions such as the head, dominated by white patches.
But Yates and colleagues at Edinburgh University found that if anything, pigment cells moved faster in mice with mutated Kit genes. What caused the piebald patterning was a different consequence of the mutation: the cells did not multiply as well as they moved through the skin, leaving the animals’ underbellies and extremities often devoid of the pigment needed to make dark fur.
Writing in the journal Nature Communications, the researchers describe how they went on to build a mathematical model that used factors such as the movement and growth of cells to reproduce piebald patterns. They found that even a small drop in the rate at which cells multiplied was enough to produce the characteristic white patches.
The work could steer research into human disorders such as Waardenburg syndrome, a form of congenital deafness, Hirschsprung’s disease, caused by a failure of nerve cells to grow properly into the gut, and Ondine’s curse, a form of sleep apnoea that can be fatal.
Among the scientists’ results are hints of why two people with the same mutation might experience different severities of disease. Unexpectedly, the pigment cells moved and multiplied at random, said Richard Mort, the first author on the study.
When cells are plentiful, such random effects even out, and have little impact on an animal’s health. But neurocristopathies often mean too few cells have reached the right place in the body, and random differences in how these cells develop could lead to huge differences in an animal’s health.
“There’s a randomness in the way the cells behave which means that the white patch you get is never the same, even in genetically identical individuals,” said Ian Jackson, a senior author on the study at Edinburgh University.
“You could have a situation where genetically identical twins would have the same disease, but to a different degree because of the random nature.” | – Researchers now know why and how tuxedo cats wear a tuxedo—and it's not for formal galas (cats hate those). The Guardian reports scientists had already figured out that piebald animals get their distinctive white patches because of a mutated gene. But they were way off in their theory of how that mutation produced white bellies and heads. According to the Telegraph, the long-running theory was that the mutation caused pigment cells to move too slowly to completely cover the developing animal. But new research shows the genetic mutation actually causes pigment cells to move faster. The problem is that it also causes them to divide slowly, meaning there aren't enough of them to cover the whole animal. This leads to the second busted hypothesis about piebald animals. Gizmodo reports scientists assumed something in the DNA was controlling pigment cells and telling them where to go. In reality, "the pigment just goes wherever." “There’s a randomness in the way the cells behave, which means that the white patch you get is never the same, even in genetically identical individuals,” researcher Ian Jackson tells the Telegraph. Jackson's study was published Wednesday in Nature Communications. While it's fun to know tuxedo cats' tuxedos are like snowflakes, the study actually has a practical application. According to the Guardian, a mathematical model developed by researchers could help scientists understand a related genetic mutation that can cause everything from deafness to cancer in humans. (In Finland, reindeer glow in the dark.) |
the study population was recruited from chennai , the fourth largest city in india . in our study , the computed sample size is 5830 .
this estimation is based on the following assumptions: the prevalence of dr in the general population is assumed to be 1.3%, with a relative precision of 25%, a drop-out rate of 20%, and a design effect of 2. the sample size for the prevalence survey was computed as n = 4pq/d^2, where p is the expected prevalence, q = 1 - p, and d is the precision.
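as a check on the arithmetic, the quoted figure of 5830 can essentially be reproduced from the stated assumptions. note that 4pq/d^2 inflated by the 20% drop-out alone already gives about 5831; how the stated design effect of 2 enters the published computation is not spelled out above, so this sketch is only indicative.

```python
p = 0.013             # assumed prevalence of dr
q = 1 - p
d = 0.25 * p          # relative precision of 25%
n = 4 * p * q / d**2  # base sample size, n = 4pq / d^2
n_total = n * 1.2     # inflate for the expected 20% drop-out
print(round(n), round(n_total))  # ~4859 and ~5831, close to the quoted 5830
```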
sampling was done in two stages : selection of divisions and selection of study subjects .
the selection of divisions was done using computer-generated random numbers; of 155 divisions, ten were selected, ensuring that one division per corporate zone was represented in the sample.
600 individuals were enumerated for each division (a total of 6000 in 10 zones).
family members living on the same premises and sharing a common kitchen were defined as being the members of one household. a door-to-door survey of all the households on the right side of the street was conducted in the selected division until a total of 600 subjects was reached.
institutional review board approval was obtained , and a written informed consent was obtained from the subjects as per the declaration of helsinki .
inclusion criteria included individuals aged 40 years and residing for a minimum of 6 months at the same residence .
the epidemiology team was provided with intensive training on a one-to-one basis on conducting a household survey, enumeration, filling out the study data sheet, and usage of the blood pressure (bp) apparatus and glucometer.
the main objective was to avoid bias or errors in any of the procedures employed .
each trainee was evaluated individually and allowed to participate in the study only after he / she displayed minimum error rates for the tasks involved in the study . in order to ensure accurate and reliable data ,
the glucometer was calibrated every day and its reproducibility was assessed by measuring the blood glucose for the same patient six times and also with two machines .
the scale for measuring the weight was calibrated with a known weight once a week .
glycosylated hemoglobin ( hba1c ) fractions were estimated by using merck micro lab 120 semi - automated analyzer ( bio - rad diastat hba1c reagent kit ) .
total serum cholesterol , high density lipoproteins ( hdls ) , and serum triglycerides ( cholesterol oxidase - peroxidase ) were estimated .
low serum hdl cholesterol levels were defined as < 1.03 mmol / l ( < 40 mg / dl ) for men and < 1.29 mmol / l ( < 50 mg / dl ) for women .
microalbuminuria estimation was done by a semi - quantitative procedure ( bayer clinitek 50 urine chemistry analyzer ) .
subjects were considered to have microalbuminuria if the urinary albumin excretion was between 30 and 300 mg/24 h, and macroalbuminuria if it was more than 300 mg/24 h. the presence of diabetic neuropathy was considered if the vibration perception threshold (vpt) value was > 20 v.
vpt was measured using sensitometer by a single observer by placing a biothesiometer probe perpendicular to the distal plantar surface of the great toe of both legs .
anthropometric measurements , including weight , height , waist , and hip were obtained using standardized techniques .
abdominal obesity was defined as waist circumference ( wc ) 90 cm for men and 80 cm for women .
the clinical parameters were collected by trained nurses, whereas the interviews were conducted by trained personnel. out of 1414 subjects with type 2 diabetes, 248 (17.5%) were newly diagnosed; persons with newly diagnosed diabetes were defined as those who had a fasting blood glucose level ≥ 110 mg/dl on two separate days.
persons with known diabetes were those who were using either oral antiglycemic drugs or insulin or both .
insulin users were defined as subjects using insulin for glycemic control and insulin nonusers were defined as those either not using any antidiabetic treatment or using diet control or oral hypoglycemic medications .
the duration of insulin usage was calculated as the number of years for which the patient had been using insulin treatment to achieve glycemic control.
the duration prior to initiating insulin therapy was calculated by subtracting the duration of insulin usage from the duration of diabetes in years .
optimal control of hba1c was defined based on the world health organization and the american diabetes association guidelines (optimal hba1c, < 7%; suboptimal hba1c, ≥ 7%).
dr was clinically graded using klein 's classification ( modified early treatment dr study [ etdrs ] scales ) .
digital photographs were assessed and graded by two independent observers ( experienced retinal specialists ) in a masked fashion .
the photographs were graded against the standard photographs of the etdrs grading system for the severity of retinopathy .
a computerized database was created for all the records. the statistical package for the social sciences - spss (version 9.0, spss inc., chicago, il, usa) was used for the analysis.
all normally distributed data were compared using a student 's t - test , while those that did not follow normal distribution were examined using nonparametric tests .
all the data were expressed as mean ± standard deviation or as percentages. statistical significance was assumed at p ≤ 0.05.
for that reason , we utilized univariate and multivariate logistic regression analyses to elucidate the association between insulin usage and dr . the odds ratio ( or ) , with 95% confidence intervals ( cis ) , was calculated for the studied variables . using a logistic regression procedure , we calculated the area under receiver operating characteristic ( roc ) curve ( auc ) for dr in insulin users and nonusers .
predictive accuracy for logistic regression model was assessed by comparing the observed and the expected retinopathy by using the hosmer lemeshow ( hl ) goodness - of - fit test .
the chi - square test was carried out to determine if the observed and expected frequencies are significantly different .
a p > 0.05 for the hl test was considered suggestive of a calibrated model .
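for illustration, a minimal version of the hosmer-lemeshow statistic can be written as below. this is a generic sketch (ten risk-decile groups, g - 2 degrees of freedom, and nondegenerate expected counts are assumed), not the analysis code used in the study.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    # y: 0/1 outcomes, p: predicted risks (numpy arrays); group subjects
    # by deciles of predicted risk and compare observed with expected
    # event counts using a chi-square statistic
    order = np.argsort(p)
    stat = 0.0
    for idx in np.array_split(order, g):
        obs = y[idx].sum()
        exp = p[idx].sum()
        n = len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, df=g - 2)  # p > 0.05 suggests calibration
```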
subjects using insulin had a longer duration of diabetes (11.6 vs. 5.2 years, p < 0.0001), a younger age at onset of diabetes (45.9 vs. 50.7 years, p < 0.0001), lower diastolic bp (dbp) (78.1 mmhg vs. 82.2 mmhg,
p = 0.005), lower total serum cholesterol (175.6 mg/dl vs. 187.1 mg/dl, p = 0.023), lower serum triglycerides (127.9 mg/dl vs. 155.0 mg/dl, p = 0.030), higher hba1c (9.0% vs. 8.2%, p = 0.001), higher microalbuminuria (27.9 mg% vs. 15.4 mg%, p = 0.006), more macroalbuminuria (7.4% vs. 2.5%, p = 0.015), and more neuropathy (29.4% vs. 18.3%, p = 0.002) in comparison to subjects not using insulin.
baseline characteristics of subjects not using insulin (insulin nonusers) and subjects using insulin (insulin users)
fig. 1 shows the presence of dr and stdr in insulin nonusers and insulin users.
subjects using insulin were more likely to have dr ( 52.9% vs. 16.3% , p < 0.0001 ) and stdr ( 19.1% vs. 2.4% , p < 0.0001 ) in comparison to subjects not using insulin .
association of diabetic retinopathy and sight-threatening diabetic retinopathy in insulin nonusers and insulin users
table 2 shows the multivariate analysis for the factors associated with the development of dr in insulin users and insulin nonusers. in insulin nonusers, older age (or 0.97, 95% ci 0.95 - 0.99, p = 0.001), longer duration of diabetes (or 1.09, 95% ci 1.07 - 1.12, p < 0.0001), younger age at onset of diabetes (or 0.95, 95% ci 0.93 - 0.97, p < 0.0001), higher systolic bp (or 1.01, 95% ci 1.00 - 1.02, p = 0.027), higher hba1c (or 1.23, 95% ci 1.15 - 1.32, p < 0.0001), presence of microalbuminuria (or 2.12, 95% ci 1.43 - 3.13, p < 0.0001), presence of macroalbuminuria (or 5.03, 95% ci 2.29 - 11.02, p < 0.0001), and presence of anemia (or 1.90, 95% ci 1.22 - 2.97, p = 0.005) were associated with the development of dr. in insulin users, longer duration of diabetes (or 1.12, 95% ci 1.00 - 1.25, p = 0.044) and abdominal obesity defined by higher wc (or 1.15, 95% ci 1.02 - 1.29, p = 0.021) were associated with dr.
multivariate analysis for the factors associated with development of diabetic retinopathy in insulin nonusers and insulin users
the roc curves for dr in insulin nonusers and insulin users are shown in fig .
the auc values and hl p values for dr for the insulin nonusers were 0.77 (95% ci 0.74 - 0.81) and 0.67, respectively, and for the insulin users were 0.86 (95% ci 0.77 - 0.95) and 0.54, respectively.
receiver operating characteristic curve for the presence of diabetic retinopathy in insulin nonusers (a) and insulin users (b)
table 3 shows the association of dr with the duration of insulin usage and the duration prior to initiating insulin therapy, in relation to the glycemic control of the subject.
there was no statistically significant association between the presence of dr and duration of insulin usage .
the presence of dr was significantly associated with a longer duration (≥ 5 years) prior to initiating insulin therapy, both overall (38.0% vs. 62.0%, p = 0.013) and in subjects with suboptimal glycemic control (32.5% vs. 67.5%, p = 0.022).
association of diabetic retinopathy with duration of insulin usage and duration of insulin free period , in relation to glycemic control
we found that, in subjects with type 2 diabetes and suboptimal glycemic control, when insulin is started late in the course of the disease (a longer insulin-free duration), the deleterious effects of long-term hyperglycemia due to neglected treatment are probably responsible for the development of dr.
it is also evident that poor glycemic control is the immediate cause of the dr complication. over time, most patients with type 2 diabetes require insulin therapy, either alone or in combination with oral hypoglycemic agents, for satisfactory glycemic control
. since insulin therapy in type 2 diabetes is often initiated at a stage where glycemic control is suboptimal for the subject , insulin users have been reported to have a higher incidence of dr . moreover , recently , insulin has been independently implicated in the causation of dr and pdr . in the present study , we aimed to elucidate the relationship between insulin usage and dr .
for this, we divided the study population into two groups, insulin users and insulin nonusers. on studying the demographic profile of both groups, we observed statistically significant differences. as expected, insulin users had a longer duration of diabetes, higher hba1c, and a younger age at onset of diabetes [ table 1 ].
the insulin users also had a better lipid profile and lower dbp , when compared with insulin nonusers .
a similarly more favorable lipid profile in subjects with type 2 diabetes using insulin, in comparison to those using oral hypoglycemic agents, has been reported earlier as well.
diastolic hypertension is known to be more prevalent among younger subjects, and in our study the differences in age between insulin nonusers and insulin users were not statistically significant. however, the regulation of bp in humans is a complex interplay between several exogenous and endogenous factors, such as the renin-angiotensin system, and the presence of lower dbp among insulin users in the present study is difficult to explain without taking all these factors into consideration. concerning the microvascular diabetic complications, insulin users had more albuminuria (micro- and macro-) and more neuropathy.
likewise, insulin users were also more likely to have dr and stdr in comparison to insulin nonusers [ fig. 1 ]. in addition, we also evaluated the factors associated with dr in insulin users and nonusers separately.
because of the small sample size , however , the same could not be studied in regard to stdr .
the factors associated with dr differed among insulin users and insulin nonusers . in insulin users ,
duration of diabetes and abdominal obesity in terms of wc were associated with dr , whereas hba1c , age of onset of diabetes , presence of microalbuminuria , macroalbuminuria , and anemia did not have a statistically significant association with dr as was noted in insulin nonusers .
in contrast , in insulin nonusers , we did not find an association between abdominal obesity and presence of dr .
this suggests that in insulin users , abdominal obesity is a more significant predictor of dr than it is for insulin nonusers . in our previous study , we examined the influence of insulin use as an independent variable in association with the prevalence of dr .
we observed that insulin users were at 3.5 times the risk of dr compared with those who were not using insulin. physical exercise, which has long been recognized as an effective interventional strategy in the treatment of type 2 diabetes, may prove especially useful in insulin users to prevent retinopathy.
the usefulness of physical exercise has already been reported in patients with long - standing , insulin - treated type 2 diabetes with diabetic polyneuropathy .
on considering both the discrimination ( auc ) and calibration ( hl goodness - of - fit ) power of the multivariate analysis model , our study showed that it was appropriate for predicting the association of dr with various factors in both insulin nonusers and insulin users .
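for illustration, the discrimination (auc) of a fitted logistic model can be computed along the following lines; the data here are synthetic placeholders, not the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # placeholder covariates (duration, hba1c, ...)
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # placeholder dr outcome
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(auc, 2))                # 0.5 = chance discrimination, 1.0 = perfect
```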
another finding of this study was the lack of association between dr and the duration of insulin usage.
rather , association was noted between dr and longer duration before starting insulin , particularly in subjects with suboptimal glycemic control .
this finding suggests that insulin is simply a marker of glycemic control / disease severity rather than an independent risk factor for dr .
this suggests that, in subjects with type 2 diabetes with suboptimal glycemic control, when insulin is started late in the course of the disease (a longer insulin-free duration), the deleterious effects of long-term hyperglycemia due to neglected treatment are probably responsible for the development of dr.
the outcome reduction with initial glargine intervention (origin) trial recommended early insulin initiation in type 2 diabetic patients with an hba1c > 9% at diagnosis, in patients with a fast increase of hba1c after diagnosis and new-onset symptoms, in patients with multiple infections, and in patients with an hba1c > 7% despite maximal metformin treatment and diabetes-related complications.
origin also demonstrated reduced microangiopathy in patients with an hba1c value of 6.4% with basal insulin glargine .
patients with type 2 diabetes are initiated on additional blood glucose - lowering treatment only when the mean baseline hba1c reaches a value of 9.0% .
patients started on insulin have even higher mean hba1c of 9.6% and tend to have more severe baseline complications and co morbidities than those started on oral antidiabetic therapy .
in addition, the higher the starting a1c when therapy is initiated or changed, the less likely it is that the patient will achieve adequate glycemic control.
chronic hyperglycemia increases production of reactive oxygen species , and the subsequent oxidative stress affects insulin promoter activity ( pdx-1 and mafa binding ) , resulting in diminished insulin gene expression in glucotoxic β - cells . patients presenting with significant hyperglycemia may benefit from timely initiation of insulin therapy , which can effectively and rapidly correct their metabolic imbalance and reverse the deleterious effects of excessive glucose ( glucotoxicity ) and lipid ( lipotoxicity ) exposure on β - cell function and insulin action . glucotoxic effects are reversible with reinstitution of euglycemic conditions , and the recovery of β - cell function is greatest when the duration of exposure to hyperglycemia is shorter . hence , in subjects with type 2 diabetes having suboptimal glycemic control , starting insulin early may be more beneficial in preventing the development of dr over the longer course of the disease .
prospective studies will be required to evaluate the association of duration before starting insulin therapy with future development of dr .
one of the principal shortcomings of the study is its small sample size . of the 1414 subjects analyzed , only 68 were using insulin .
another major shortcoming is that , because of its cross - sectional design , a cause - effect relationship cannot be established between insulin usage and dr .
moreover , there was a lack of information regarding the type of insulin preparation used by the patient .
although we have performed additional analysis pertaining to the effect of duration of insulin usage versus the insulin - free period , the numbers are too small , making it difficult to generalize from them .
the strengths of the study are that it was a well - conducted population - based prevalence study in type 2 dm , and retinopathy diagnosis was based on the gold standard fundus photography and comprehensive clinical and biochemical evaluation .
in conclusion , insulin usage in type 2 diabetes was associated with the presence of dr , stdr , neuropathy , and albuminuria .
this finding suggests that insulin is simply a marker of glycemic control / disease severity rather than an independent risk factor for dr .
this suggests that in subjects with type 2 diabetes with suboptimal glycemic control , when insulin is started late in the course of the disease ( longer insulin - free duration ) , the deleterious effects of long - term hyperglycemia due to neglected treatment are probably responsible for the development of dr . in insulin users ,
abdominal obesity was found to be a more significant predictor of dr than it is for insulin nonusers .
we also observed an association of dr with the longer duration before initiating insulin therapy , particularly in subjects with suboptimal glycemic control .
| context : insulin users have been reported to have a higher incidence of diabetic retinopathy ( dr ) . aim : the aim was to elucidate the factors associated with dr among insulin users , especially the association between duration prior to initiating insulin for type 2 diabetes mellitus ( dm ) and developing dr . materials and methods : this retrospective cross - sectional observational study included 1414 subjects having type 2 dm .
insulin users were defined as subjects using insulin for glycemic control , and insulin nonusers as those either not using any antidiabetic treatment or using diet control or oral medications .
the duration before initiating insulin after diagnosis was calculated by subtracting the duration of insulin usage from the duration of dm .
dr was clinically graded using klein 's classification .
spss ( version 9.0 ) was used for statistical analysis . results : insulin users had a higher incidence of dr ( 52.9% vs. 16.3% , p < 0.0001 ) and sight - threatening dr ( 19.1% vs. 2.4% , p < 0.0001 ) in comparison to insulin nonusers . among insulin users , longer duration of dm ( odds ratio [ or ] 1.12 , 95% confidence interval [ ci ] 1.00 - 1.25 , p = 0.044 ) and abdominal obesity ( or 1.15 , 95% ci 1.02 - 1.29 , p = 0.021 ) were associated with dr .
the presence of dr was significantly associated with longer duration ( > 5 years ) prior to initiating insulin therapy , overall ( 38.0% vs. 62.0% , p = 0.013 ) and in subjects with suboptimal glycemic control ( 32.5% vs. 67.5% , p = 0.022 ) . conclusions : the presence of dr is significantly associated with longer duration of diabetes ( > 5 years ) and suboptimal glycemic control ( glycosylated hemoglobin > 7.0% ) . among insulin users , abdominal obesity was found to be a significant predictor of dr ; dr is associated with longer duration prior to initiating insulin therapy in type 2 dm subjects with suboptimal glycemic control . |
the texono collaboration has been built up since 1997 to pursue an experimental program in neutrino and astroparticle physics @xcite .
the `` flagship '' program is on reactor - based low energy neutrino physics at the kuo - sheng ( ks ) power plant in taiwan .
the ks experiment is the first large - scale particle physics experiment in taiwan .
the texono collaboration is the first research collaboration among scientists from taiwan and china @xcite .
results from recent neutrino experiments strongly favor neutrino oscillations , which imply neutrino masses and mixings @xcite . their physical origin and experimental consequences are not fully understood .
there are strong motivations for further experimental efforts to shed light on these fundamental questions by probing standard and anomalous neutrino properties and interactions @xcite .
the results can constrain theoretical models necessary to interpret the future precision data @xmath1 , or they may yield surprises , which have been characteristic of the field . in addition , these studies could also explore new detection channels to provide new tools for future investigations .
the `` kuo - sheng neutrino laboratory '' is located at a distance of 28 m from the core # 1 of the kuo - sheng nuclear power station at the northern shore of taiwan @xcite .
a multi - purpose `` inner target '' detector space of 100 @xmath2 80 @xmath2 75 cm is enclosed by 4@xmath3 passive shielding materials with a total weight of 50 tons .
different detectors can be placed in the inner space for the different scientific goals .
the detectors are read out by a versatile electronics and data acquisition systems @xcite based on 16-channel , 20 mhz , 8-bit flash analog - to - digital - convertor ( fadc ) modules .
the readout allows full recording of all the relevant pulse shape and timing information for as long as several ms after the initial trigger .
the reactor laboratory is connected via telephone line to the home - base laboratory , where remote access and monitoring are performed regularly .
data are stored and accessed with a cluster of multi - disk arrays , each with 800 gbyte of capacity .
the measurable nuclear and electron recoil spectra due to reactor @xmath4 are depicted in figure [ spectra ] , showing the effects due to the standard model [ @xmath5 ] and the magnetic moment [ @xmath6 ] in @xmath4-electron scatterings @xcite , as well as in neutrino coherent scatterings on the nuclei ( @xmath7 and @xmath8 , respectively ) .
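for orientation , the standard model and magnetic moment contributions to @xmath4 - electron scattering have the familiar forms below , written out from the kayser and vogel - engel references cited in this paper rather than copied from its own equations ; t denotes the electron recoil energy , the @xmath4 energy is written e , and the magnetic moment is in units of bohr magnetons :

\begin{align}
\left(\frac{d\sigma}{dT}\right)_{\mathrm{SM}} &= \frac{G_F^2 m_e}{2\pi}\left[ (g_V+g_A)^2 + (g_V-g_A)^2\left(1-\frac{T}{E}\right)^2 - (g_V^2-g_A^2)\,\frac{m_e T}{E^2} \right] , \\
\left(\frac{d\sigma}{dT}\right)_{\mathrm{MM}} &= \frac{\pi\alpha^2}{m_e^2}\,\mu_\nu^2\left(\frac{1}{T}-\frac{1}{E}\right) .
\end{align}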
the uncertainties in the low energy part of the reactor neutrino spectra require that experiments to measure @xmath9 should focus on higher electron recoil energies ( t @xmath10 1.5 mev ) , while mm searches should be based on measurements with t @xmath11 100 kev @xcite . observation of @xmath12 would require detectors with sub - kev sensitivities . accordingly , data taking was optimized with these strategies .
an ultra low - background high purity germanium ( ulb - hpge ) detector was used for period i ( june 2001 till may 2002 ) data taking , while 186 kg of csi(tl ) crystal scintillators were added in for period ii ( jan 2003 till sept 2003 ) .
both detector systems operate in parallel with the same data acquisition system but independent triggers .
the ulb - hpge is surrounded by nai(tl ) and csi(tl ) crystal scintillators as anti - compton detectors , and the whole set - up is further enclosed by another 3.5 cm of ofhc copper blocks , and housed in a radon shield .
after suppression of cosmic - induced background , anti - compton vetos , and convoluted events by pulse shape discrimination , a background level at 20 kev in the range of 1 kev@xmath13 kg@xmath13 day@xmath13 and a detector threshold of 5 kev are achieved . these levels are comparable to those of underground dark matter experiments . comparison of the measured spectra for 4712/1250 hours of reactor on / off data in period i @xcite shows no excess , and limits on the neutrino magnetic moment of @xmath14 at 90(68)% confidence level ( cl ) were derived . depicted in figure [ summaryplots]a is the summary of the results of @xmath15 searches versus the achieved threshold in reactor experiments .
the dotted lines denote the @xmath16 ratio at a particular ( t,@xmath15 ) .
the ks(ge ) experiment has a much lower physics threshold of 12 kev compared to the other measurements .
the large r - values imply that the ks results are robust against the uncertainties in the sm cross - sections .
the neutrino - photon couplings probed by @xmath17-searches in @xmath18-e scatterings are related to the neutrino radiative decays ( @xmath19 ) @xcite .
indirect bounds on @xmath19 can be inferred and are displayed in figure [ summaryplots]b for the simplified scenario where a single channel dominates the transition .
it corresponds to @xmath20 at 90(68)% cl in the non - degenerate case .
it can be seen that @xmath18-e scatterings give much more stringent bounds than the direct approaches .
the ks data with ulb - hpge are the lowest threshold data so far for reactor neutrino experiments , and therefore allow the studies of several new and more speculative topics .
nuclear fission at reactor cores also produces electron neutrinos ( @xmath21 ) through the production of unstable isotopes , such as @xmath22cr and @xmath23fe , via neutron capture .
the subsequent decays of these isotopes by electron capture would produce mono - energetic @xmath21 .
a realistic neutron transfer simulation has been carried out to estimate the flux .
physics analysis on the @xmath17 and @xmath19 of @xmath21 will be performed , while the potential for other physics applications will be studied . in addition , the potential for an _ inclusive _ analysis of the anomalous neutrino interactions with matter , as well as studies on neutrino - induced nuclear transitions , will be pursued .
period ii data taking includes the addition of an array of 186 kg of csi(tl ) crystals @xcite , each module being 2 kg in mass and 40 cm in length .
the physics goal is to measure the standard model neutrino - electron scattering cross sections , and thereby to provide a measurement of @xmath24 at the untested mev range .
the strategy @xcite is to focus on data at high ( @xmath10 2 mev ) recoil energy , where the uncertainties due to the reactor neutrino spectra are small .
the large target mass compensates the drop in the cross - sections at high energy .
in addition , various r&d projects @xcite are pursued in parallel to the ks reactor neutrino experiment .
in particular , a prototype ultra - low - energy germanium ( ule - hpge ) detector of 5 g mass is being studied , with potential applications on dark matter searches and neutrino - nuclei coherent scatterings .
a hardware energy threshold of better than 100 ev has been achieved , as illustrated in figure [ lege ] .
the ule - hpge is placed inside the shieldings at the ks laboratory where the goal will be to perform the first - ever background studies at the sub - kev energy range .
it is technically feasible to build an array of such detectors to increase the target size to the 1 kg mass range .
h.t . wong and j. li , mod . phys . lett . * a 15 * , 2011 ( 2000 ) ; h.t . wong , j. li and z.y . zhou , hep - ex/0307001 ( 2003 ) , and references therein .
d. normile , science * 300 * , 1074 ( 2003 ) .
neutrino-02 proc . , nucl . phys . * b * ( proc . suppl . ) * 118 * ( 2003 ) ; for updates , see these proceedings .
j. valle , in ref . @xcite , and references therein .
lai et al . , texono coll . , nucl . instrum . methods * a 465 * , 550 ( 2001 ) .
b. kayser et al . , phys . rev . * d 20 * , 87 ( 1979 ) ; p. vogel and j. engel , phys . rev . * d 39 * , 3378 ( 1989 ) .
li and h.t . wong , j. phys . * g 28 * , 1453 ( 2002 ) .
li et al . , texono coll . , phys . rev . lett . * 90 * , 131802 ( 2003 ) , and references therein .
raffelt , phys . rev . * d 39 * , 2066 ( 1989 ) .
li et al . , texono coll . , nucl . instrum . methods * a 459 * , 93 ( 2001 ) ; y. liu et al . , texono coll . , nucl . instrum . methods * a 482 * , 125 ( 2002 ) . | a laboratory has been constructed by the texono collaboration at the kuo - sheng reactor power plant in taiwan to study low energy neutrino physics .
the facilities of the laboratory are described .
a limit on the neutrino magnetic moment of @xmath0 at 90% confidence level has been achieved from measurements with a high - purity germanium detector .
other physics topics , as well as the various r&d program , are surveyed . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Witness Security and Protection
Grant Program Act of 2013''.
SEC. 2. WITNESS PROTECTION GRANT PROGRAM.
(a) Definitions.--In this section--
(1) the term ``applicant'' means a State, tribal, or local
government that applies for a grant under this section; and
(2) the terms ``serious drug offense'' and ``serious
violent felony'' have the meaning given those terms in section
3559(c)(2) of title 18, United States Code.
(b) Grants Required.--Subject to subsection (j), the Attorney
General shall make competitive grants to State, tribal, and local
governments to establish or maintain programs that provide protection
or assistance to witnesses in court proceedings involving--
(1) a homicide, serious violent felony, or serious drug
offense; or
(2) gangs or organized crime.
(c) Criteria.--In making grants under this section, the Attorney
General shall evaluate applicants based upon the following:
(1) The extent to which the applicant lacks infrastructure
to support programs that provide protection or assistance to
witnesses.
(2) The prevalence of witness intimidation in the
jurisdiction of the applicant.
(3) The percentage of cases not prosecuted by the applicant
due to witness intimidation.
(4) The number of homicides per capita committed in the
jurisdiction of the applicant.
(5) The number of serious violent felonies or serious drug
offenses per capita committed in the jurisdiction of the
applicant.
(6) The extent to which organized crime is present in the
jurisdiction of the applicant.
(7) Any other criteria that the Attorney General determines
appropriate.
(d) Technical Assistance.--From amounts made available under
subsection (j) to carry out this section, the Attorney General, upon
request of a recipient of a grant under this section, shall direct the
appropriate offices within the Department of Justice to provide
technical assistance to the recipient to the extent the Attorney
General determines technical assistance is needed to establish or
maintain a program that provides protection or assistance to witnesses.
(e) Best Practices.--
(1) Report.--A recipient of a grant under this section
shall submit to the Attorney General a report, in such form and
manner and containing such information as specified by the
Attorney General, that evaluates each program established or
maintained pursuant to the grant, including policies and
procedures under the program.
(2) Development of best practices.--Based on the reports
submitted under paragraph (1), the Attorney General shall
develop best practice models to assist State, tribal, and local
governments in addressing--
(A) witness safety;
(B) short-term and permanent witness relocation;
(C) financial and housing assistance; and
(D) any other services related to witness
protection or assistance that the Attorney General
determines necessary.
(3) Dissemination to states.--Not later than 1 year after
developing best practice models under paragraph (2), the
Attorney General shall disseminate the models to State, tribal,
and local governments.
(4) Sense of congress.--It is the sense of Congress that
State, tribal, and local governments should use the best
practice models developed and disseminated under this
subsection to evaluate, improve, and develop witness protection
or witness assistance programs as appropriate.
(5) Rule of construction relating to sensitive
information.--Nothing in this section shall be construed to
require the dissemination of any information that the Attorney
General determines--
(A) is law enforcement sensitive and should only be
disclosed within the law enforcement community; or
(B) poses a threat to national security.
(f) Federal Share.--
(1) In general.--The Federal share of the cost of a program
carried out using a grant made under this section shall be not
more than 75 percent.
(2) In-kind contributions.--
(A) In general.--Subject to subparagraph (B), the
non-Federal share for a program carried out using a
grant made under this section may be in the form of in-
kind contributions that are directly related to the
purpose for which the grant was made.
(B) Maximum percentage.--Not more than 50 percent
of the non-Federal share for a program carried out
using a grant made under this section may be in the
form of in-kind contributions.
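The cost-share arithmetic in this subsection can be illustrated with a short worked example; the dollar figures below are hypothetical and are not drawn from the bill.

# Hypothetical illustration of subsection (f): the Federal share may not
# exceed 75 percent of total program cost, and at most 50 percent of the
# non-Federal share may be met with in-kind contributions.
total_cost = 1_000_000                   # hypothetical total program cost
max_federal = 0.75 * total_cost          # $750,000 maximum Federal share
non_federal = total_cost - max_federal   # $250,000 non-Federal share
max_in_kind = 0.50 * non_federal         # at most $125,000 may be in-kind
print(max_federal, non_federal, max_in_kind)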
(g) Administrative Costs.--Of amounts made available to carry out
this section for a fiscal year, the Attorney General may use not more
than 5 percent for administrative costs.
(h) Geographic Distribution.--In making grants under this section,
the Attorney General shall--
(1) to the extent reasonable and practical, ensure an
equitable geographical distribution throughout the United
States of programs that provide protection or assistance to
witnesses; and
(2) give due consideration to applicants from both urban
and rural areas.
(i) Report to Congress.--The Attorney General shall submit a report
to Congress--
(1) not later than December 31, 2014, on the implementation
of this section, including any information on programs funded
by grants made under this section; and
(2) not later than December 31, 2019, on the programs
funded by grants made under this section, including on best
practice models developed under subsection (e)(2).
(j) Authorization of Appropriations.--There is authorized to be
appropriated to carry out this section $30,000,000 for each of fiscal
years 2014 through 2018. | Witness Security and Protection Grant Program Act of 2013 - Directs the Attorney General to: (1) make competitive grants to state, tribal, and local governments to establish or maintain programs that provide protection or assistance to witnesses in court proceedings involving a homicide, a serious violent felony, a serious drug offense, gangs, or organized crime; (2) evaluate grant applicants based on specified criteria, including the prevalence of witness intimidation and the number of such crimes per capita in the applicant's jurisdiction; (3) provide technical assistance to applicants for establishing or maintaining such programs; and (4) develop and disseminate best practice models to assist such governments in addressing witness safety, short-term and permanent witness relocation, financial and housing assistance, and other necessary witness protection or assistance services. Urges such governments to use such models to evaluate, improve, and develop witness protection or witness assistance programs. |
SECTION 1. TREATMENT OF GOODS EXPORTED FOR MODIFICATION AND REIMPORTED.
(a) Textile and Apparel Goods.--
(1) In general.--Subchapter II of chapter 98 of the
Harmonized Tariff Schedule of the United States is amended by
inserting in numerical sequence the following subheading, with
the article description having the same degree of indentation
as the article description for subheading 9802.00.60:
`` 9802.00.70 Textile and apparel goods classifiable under chapter 61, except goods of heading 9802.00.90 and goods imported under provisions of subchapter XIX or XX of this chapter, if exported for further processing:
Rates of duty, column 1 (general): A duty upon the full value of the imported article, less the cost or value of materials, including thread, yarn, fabric, or components the product of the United States and provided for under any of headings 5106 through 5110, 5204 through 5207, 5306 through 5308, 5401 through 5406, or 5508 through 5511, or chapter 60 or 61 (see U.S. note 4 of this subchapter).
Rates of duty, column 1 (special): Free (CL, CO, IL, JO, KR, P, PA, PE, SG). A duty upon the full value of the imported article, less the cost or value of materials, including thread, yarn, fabric, or components the product of the United States and provided for under any of headings 5106 through 5110, 5204 through 5207, 5306 through 5308, 5401 through 5406, or 5508 through 5511, or chapter 60 or 61 (see U.S. note 4 of this subchapter) (AU, B, BH, C, CA, E, MA, MX, OM). Free, for products described in U.S. note 7 to this subchapter. Free, for qualifying articles from sub-Saharan African countries enumerated in U.S. note 7 to this subchapter.
Rates of duty, column 2: A duty upon the full value of the imported article, less the cost or value of materials, including thread, yarn, fabric, or components the product of the United States and provided for under any of headings 5106 through 5110, 5204 through 5207, 5306 through 5308, 5401 through 5406, or 5508 through 5511, or chapter 60 or 61 (see U.S. note 4 of this subchapter).''.
(2) Conforming amendment.--U.S. note 4 to subchapter II of
chapter 98 of the Harmonized Tariff Schedule of the United
States is amended, in the matter preceding paragraph (a), by
inserting ``and subheading 9802.00.70'' after ``9802.00.90''.
(b) Commingling of Fungible Goods Exported for Repairs or
Alterations.--U.S. note 3 to subchapter II of chapter 98 of the
Harmonized Tariff Schedule of the United States is amended by adding at
the end the following:
``(e) For purposes of subheadings 9802.00.40 and 9802.00.50, if an
article is exported from the United States for the purpose of repairing
or altering the article and the article is subsequently imported into
the United States--
``(1) the article shall be considered to be the same
article that was exported without regard to whether the article
contains 1 or more components recovered from an identical or
similar article that was also exported from the United States;
and
``(2) the cost or value of any such components shall not be
included in the value of the article when the article enters
the United States.''.
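The duty base described in this subsection (full value less the cost or value of the excluded components) can be illustrated with a short worked example; every figure below, including the duty rate, is hypothetical.

# Hypothetical illustration of the duty base for a reimported article:
# duty is assessed on the full value less the cost or value of the
# U.S.-origin materials or recovered components excluded above.
full_value = 10_000.00       # hypothetical appraised value on reentry
us_components = 3_500.00     # hypothetical excluded U.S.-origin content
duty_base = full_value - us_components   # $6,500.00 dutiable value
duty_rate = 0.16             # hypothetical ad valorem rate
print(duty_base * duty_rate) # $1,040.00 duty owed in this example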
(c) Articles Previously Imported.--
(1) Duty treatment.--The article description for heading
9801.00.20 of the Harmonized Tariff Schedule of the United
States is amended to read as follows: ``Articles, previously
imported, with respect to which the duty was paid upon such
previous importation or which were imported previously free of
duty, if (1) reimported, without having been advanced in value
or improved in condition by any process of manufacture or other
means while abroad, after having been exported under lease or
similar use agreements, bailment agreements, or for
warehousing, repackaging, or both, and (2) reimported by or for
the account of the person who imported it into, and exported it
from, the United States.''.
(2) Commingling of fungible goods.--The U.S. notes to
subchapter I of chapter 98 of the Harmonized Tariff Schedule of
the United States are amended by adding at the end the
following new note:
``3.(a) For purposes of heading 9801.00.20--
``(i) fungible goods exported from the United States may be
commingled, and
``(ii) the origin, value, and classification of such goods
may be accounted for using an inventory management method.
``(b) If a person chooses to use an inventory management method
under paragraph (a) with respect to fungible goods, the person shall
use the same inventory management method for any goods with respect to
which the person claims fungibility.
``(c) For purposes of this note--
``(i) the term `fungible good' means any good that is
commercially interchangeable with another good and that has
properties that are essentially identical to the properties of
another good; and
``(ii) the term `inventory management method' means any
method for managing inventory that is based on generally
accepted accounting principles.''.
(d) Use of Manufacturer's Identification Code for Textile and
Apparel Products.--The U.S. notes to chapter 98 of the Harmonized
Tariff Schedule of the United States are amended by adding at the end
the following new note:
``4. For textile and apparel products classified in subchapter I or
II of this chapter, the manufacturer's identification code (MID) of the
facility that repairs, alters, assembles, processes, stores, or
otherwise handles the products may be used on any customs entry
documentation or electronic data transmission that requires
identification of the manufacturer.''.
SEC. 2. EFFECTIVE DATE.
(a) In General.--Subject to subsection (b), the amendments made by
this Act shall apply to goods entered, or withdrawn from warehouse for
consumption, on or after the 15th day after the date of the enactment
of this Act.
(b) Retroactive Application.--
(1) In general.--Notwithstanding section 514 of the Tariff
Act of 1930 (19 U.S.C. 1514) or any other provision of law, and
subject to paragraph (2), the entry of any good--
(A) that was liquidated or made on or after January
9, 2008, and before the 15th day after the date of the
enactment of this Act, and
(B) with respect to which there would have been no
duty if the amendment made by section 1(c)(1) applied
to such entry,
shall be liquidated or reliquidated as if such amendment
applied to such entry.
(2) Requests.--A liquidation or reliquidation may be made
under paragraph (1) with respect to an entry only if a request
therefor is filed with U.S. Customs and Border Protection
before the later of the 180th day after the date of the
enactment of this Act or the 180th day after the date of
liquidation of the entry, that contains sufficient information
to enable U.S. Customs and Border Protection--
(A) to locate the entry; or
(B) to reconstruct the entry if it cannot be
located.
(3) Payment of amounts owed.--Any amount owed by the United
States pursuant to the liquidation or reliquidation of an entry
of an article under paragraph (1) shall be paid, without
interest, not later than 90 days after the date of the
liquidation or reliquidation (as the case may be).
(4) Definition.--In this subsection, the term ``entry''
includes a withdrawal from warehouse for consumption. | Amends the Harmonized Tariff Schedule of the United States to prescribe requirements for the duty treatment of certain textile and apparel goods exported for processing abroad and subsequently reimported into the United States. Revises requirements granting duty-free treatment of previously imported articles, for which a duty was paid or where no duty was paid, if: reimported, without having been advanced in value or improved in condition while abroad, after having been exported under bailment agreements or for warehousing, repackaging, or both; and (as under current law) reimported by or for the account of the person who imported it into, and exported it from, the United States. Declares that, with respect to the duty imposed on the value of repairs or alterations made abroad to articles and subsequently imported into the United States: the article shall be considered to be the same article that was exported without regard to whether it contains one or more components recovered from an identical or similar article that was also exported from the United States, and the cost or value of any such components shall not be included in the value of the article when it enters the United States. Permits, with respect to such articles, the commingling of fungible goods exported from the United States, as well as use of an inventory management method to account for the origin, value, and classification of such goods. Permits use of the manufacturer's identification (MID) code of the facility that repairs, alters, assembles, processes, stores, or otherwise handles the textile and apparel goods on any customs entry documentations or electronic data transmissions. |
this study was a crossover , randomized clinical trial for assessing the accuracy of the swede score11 - 13 by colposcopy performed with standard colposcopes and the gynocular , using biopsy as the criterion standard .
the inclusion criteria were ( 1 ) women who resulted positive for via at opportunistic screening by trained family welfare visitors , senior staff nurses , and doctors in the dhaka region,6 bangladesh , referred for colposcopy at the colposcopy clinic of bangabandhu sheikh mujib medical university ( bsmmu ) from june 2012 to september 2012 . in bangladesh , 2.3% of women have been screened with via to date ; among the women who had via , approximately 5% were via positive.6 a woman is considered to be via positive when a trained doctor or nurse notices sharp , distinct , well - defined , dense acetowhite areas on the cervix , with or without raised margins , close to the squamocolumnar junction in the transformation zone.6 other inclusion criteria were ( 2 ) ability to understand written and oral information and ( 3 ) signing an informed consent to participate in the study after receiving oral and written information from a social worker .
exclusion criteria were the following : ( 1 ) on - going vaginal bleeding , ( 2 ) any previous gynecologic examinations for at least 1 week before , and ( 3 ) pregnancy .
women who chose not to participate in the study had a standard colposcopy examination . in total
, 540 women were included in the study after written consent . all women in the study
were examined by 1 of the 6 colposcopy specialists in the colposcopy clinic of bsmmu .
the colposcopy specialists were physicians or gynecologists who were trained in colposcopy , cold coagulation , and loop electrosurgical excision procedure at the colposcopy clinic of bsmmu.6 colposcopy was performed using 1 of the 2 standard colposcopes ( karl kaps som 52 , karl kaps gmbh & co. kg , asslar / wetzlar , germany , or leisegang 1df , leisegang feinmechanik - optik gmbh , berlin , germany ) and the gynocular .
women were randomly allocated in blocks of 50 to start the examination with either standard colposcope or gynocular examination of the cervix .
a total of 298 women started the examination with colposcope and 242 women with the gynocular .
all women were inspected with a standard colposcope and the gynocular by the same examiner in a crossover design to assess the agreement between the standard colposcope and the gynocular .
this crossover design was selected to reduce potential observer variability.14 - 16 in performing the swede score , each of the 5 colposcopic variables ( acetowhiteness , margins plus surface , vessel pattern , lesion size , and iodine staining ) was given a score of 0 , 1 , or 2 points.11 - 13 a nonlubricated self - holding speculum was inserted into the vagina and the cervix was visualized .
the examination started with inspection of cervical vessel patterns with the colposcope or the gynocular , as randomized , using the red - free ( green filter ) mode . a cervical cell sample was then collected with a soft spatula from the cervix and a cytobrush from the cervical canal for liquid - based cytology ( thinprep ; hologic inc , marlborough , mass ) .
then , the cervix was wiped with 5% acetic acid for 1 minute , followed by completion of the first colposcopic examination .
after each examination , the 4 swede score variables ( acetowhiteness , margins plus surface , vessel pattern , and lesion size ) were scored by the colposcopist and documented by the study coordinator .
the examiner then changed instruments and repeated the examination procedure , and the new 4 swede score variables were documented by the study coordinator .
next , the cervix was swabbed with 5% lugol iodine solution , and the colposcopist scored the swede score 's fifth variable ( iodine staining ) with both instruments as randomized .
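a minimal sketch of the scoring arithmetic just described ( an assumed representation for illustration , not software used in the study ) : each of the 5 swede variables contributes 0 , 1 , or 2 points , giving a 0 - 10 total that is compared with the biopsy cutoff of 4 .

SWEDE_VARIABLES = ("acetowhiteness", "margins_surface", "vessel_pattern",
                   "lesion_size", "iodine_staining")

def total_swede_score(points):
    # points: mapping from each swede variable to its 0/1/2 score
    assert set(points) == set(SWEDE_VARIABLES)
    assert all(score in (0, 1, 2) for score in points.values())
    return sum(points.values())

example = {variable: 1 for variable in SWEDE_VARIABLES}  # illustrative lesion
print(total_swede_score(example))       # 5
print(total_swede_score(example) >= 4)  # True: meets the biopsy cutoff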
the examination was completed with 1 or more biopsies taken from areas of suspected cervical lesions .
punch biopsies of the cervix were done in all women with swede score of 4 and above.11,12 the cervical biopsies were analyzed at the histopathology laboratory of bsmmu .
the thinprep ( hologic inc ) tests were analyzed at the laboratory of clinical pathology and cytology in karolinska university hospital at karolinska institutet in stockholm , sweden .
the human papillomavirus ( hpv ) tests were analyzed in the laboratory of clinical microbiology and the laboratory of viruses of karolinska university hospital at karolinska institutet in stockholm , sweden , using the cobas hpv test ( roche molecular systems inc , pleasanton , calif ) .
the test specifically identifies ( types ) hpv-16 and hpv-18 while detecting the rest of the high - risk hpv types ( 31 , 33 , 35 , 39 , 45 , 51 , 52 , 56 , 58 , 59 , 66 , and 68 ) at clinically relevant infection levels .
the final diagnosis was the histopathology result , using the cervical intraepithelial neoplasia ( cin ) system,17 or another histopathologic diagnosis when such an outcome occurred .
women with cin1 lesions were given the choice of being treated directly or a follow - up appointment after 6 months .
women with invasive cancers were referred to the department of gynecology and oncology at bsmmu for further management .
the study was approved by the local ethics committees in bangladesh and in sweden : the institutional review board of bsmmu ( dnr bsmmu/2012/3176 ) and the stockholm regional ethical review board ( dnr 2012/545 - 31/1 ) .
the gynocular ( gynius ab , stockholm , sweden ) has similar specifications to traditional colposcopes ( table 1 ) . the gynocular is a monocular with a 300 mm focal distance and 3 magnifications : 5x , 8x , and 12x . it is a handheld device measuring 50 x 33 x 166 mm and comes with a tripod - mounting clip that screws into a standard tripod , enabling the medical professional to also perform colposcopy in hands - free mode for ease of biopsy ( fig . ) . the gynocular uses high - intensity leds for warm white illumination , has a green filter light , and is powered by a rechargeable lithium - ion battery ( fig . ) . the gynocular is approved by the swedish national drug authority as a noninvasive medical diagnostic class i tool and is ce marked .
[ table caption : technical characteristics of the gynocular and the colposcopes leisegang 1df and karl kaps som 52 . ]
[ figure caption : the gynocular with charger and mounting clip for camera tripod . ]
[ figure caption : the gynocular showing lens , green filter , and warm white led illumination . ]
all statistical analyses were performed using r version 2.14.1 ( r foundation for statistical computing , vienna , austria ) .
the baseline patient characteristics of the women were summarized using means ( sd ) and frequencies ( % ; table 1 ) . to test the level of agreement between the colposcope and the gynocular , we calculated the percentage agreement and the weighted kappa statistic.15 cervical lesions were classified by the swede score system using the gynocular and a standard colposcope.10,11 swede scores of 4 and above were the cutoff scores for cervical biopsy by either instrument .
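a minimal sketch of the two agreement statistics named above , on illustrative data rather than the study 's ; a linearly weighted cohen kappa is one reasonable reading of the weighted kappa statistic .

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
colposcope = rng.integers(0, 11, size=521)  # illustrative paired swede scores
gynocular = np.clip(colposcope + rng.integers(-1, 2, size=521), 0, 10)

percent_agreement = np.mean(colposcope == gynocular)
weighted_kappa = cohen_kappa_score(colposcope, gynocular, weights="linear")
print(percent_agreement, weighted_kappa)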
we calculated detection rates of cin1 , cin2 , cin3 , cin3 + ( invasive cancer ) , cervicitis , cervical tuberculosis , hpv-16 , hpv-18 , and the other high - risk hpv types ( 31 , 33 , 35 , 39 , 45 , 51 , 52 , 56 , 58 , 59 , 66 , and 68 ) .
a positive biopsy result was defined as cin1 , cin2 , cin3 , or cin3 + , and we calculated the sensitivity , specificity , positive predictive value ( ppv ) , and negative predictive value ( npv ) of the swede score , using biopsy as the criterion standard , for all cutoff levels of the swede score between 4 and 10 .
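a minimal sketch of the accuracy calculation just described , assuming one swede score and one dichotomized biopsy result per woman ( variable names are illustrative ) :

import numpy as np

def accuracy_at_cutoff(swede, biopsy_positive, cutoff):
    # classify test-positive at the given swede cutoff and tabulate
    test_positive = swede >= cutoff
    tp = np.sum(test_positive & biopsy_positive)
    fp = np.sum(test_positive & ~biopsy_positive)
    fn = np.sum(~test_positive & biopsy_positive)
    tn = np.sum(~test_positive & ~biopsy_positive)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# for cutoff in range(4, 11):  # all cutoff levels between 4 and 10
#     print(cutoff, accuracy_at_cutoff(swede, biopsy_positive, cutoff))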
the mean ( sd ) age of first marriage was 17.1 ( 3.5 ) years , and the mean age at first delivery was 19.2 ( 3.5 ) years .
[ table caption : baseline characteristics . ] in our study , hpv-16 was present in 20 ( 3.9% ) women , hpv-18 was found in 2 ( 0.4% ) women , and other high - risk hpv types ( 31 , 33 , 35 , 39 , 45 , 51 , 52 , 56 , 58 , 59 , 66 , and 68 ) were detected in 22 ( 4.3% ) women .
liquid - based cytology was normal in 432 ( 80.0% ) women and detected 15 ( 2.8% ) women with atypical squamous cells of unknown significance , 8 ( 1.5% ) women with cin1 , 9 ( 1.7% ) women with cin2 , 2 ( 0.4% ) women with cin3 , and 2 ( 0.4% ) women with cin3 + ( invasive cancer ) .
biopsy was normal in 16 ( 3.0% ) women and identified 85 ( 15.6% ) women with chronic cervicitis , 94 ( 17.4% ) women with cin1 , and 28 ( 5.2% ) women with cin2 .
two ( 0.4% ) women had cin3 , and 2 ( 0.4% ) women had invasive cervical cancer ( cin3 + ) . in 2 ( 0.4% ) women , the biopsy result showed cervical tuberculosis . thus , swede score - directed biopsy diagnosed cin2 + ( cin2 , cin3 , and invasive cancer ) in 38 ( 7.1% ) women , whereas cytology detected cin2 + in 13 ( 2.4% ) of the women .
swede scores were obtained by cervical examination with the colposcope and the gynocular . a cross tabulation of swede scores by colposcope versus the gynocular showed high agreement in 521 measurements , with a kappa coefficient of 0.998 and a p value of less than 0.001 ( fig . 3 ) . there were no significant differences between the gynocular and the colposcope in identifying cervical lesions detected by biopsy ( fig . ) .
[ figure caption : receiver operating characteristic curves for predicting a positive biopsy result , defined as cin2 , cin3 , or cin3 + . ]
to address the possibility that the swede score taken by the first instrument biased the swede score taken by the second instrument , we reanalyzed the receiver operating characteristic curves excluding the swede score taken by the second instrument ( fig . ) . figure 5 compares the first swede score taken by the gynocular or the standard colposcope and presents the sensitivity and specificity for detecting cin2 + ; it shows no significant differences between the gynocular and the standard colposcope in detecting cin2 + lesions .
[ figure caption : comparison of gynocular and colposcope using only the first measurement . ]
using the cutoff value of 4 and above for swede score and biopsy , colposcopy by the gynocular had a sensitivity of 83.3% ( 95% ci [ confidence interval ] , 65.3% - 94.4% ) and a specificity of 23.6% ( 95% ci , 17.4% - 30.9% ) , and the colposcope had a sensitivity of 83.3% ( 95% ci , 65.3% - 94.7% ) and a specificity of 24.2% ( 95% ci , 17.9% - 31.5% ) ( tables 2 and 3 ) .
positive predictive value was 88.6% ( 95% ci , 75.4% - 96.2% ) for the gynocular and 88.9% ( 95% ci , 75.9% - 96.3% ) for the colposcope . negative predictive value was 16.6% ( 95% ci , 11.0% - 23.5% ) for the gynocular and 16.7% ( 95% ci , 75.9% - 96.3% ) for the colposcope ( table 3 ) . with increasing swede score cutoff , the sensitivity decreased whereas the specificity increased , both for the gynocular and the colposcope , and further analysis of each individual item of the swede score showed no differences between the instruments ( table 3 ) .
the main finding of our study is that there were no differences between the gynocular and the standard colposcope in detecting cervical lesions in biopsy , with excellent agreement of swede scores by the gynocular and colposcope as well as high intraobserver agreement . a swede score above 4 in via - positive women gave a good indication for biopsy , whereas a swede score of 8 and above , indicating the possibility of a high - grade cervical lesion , was highly correlated with cin2 + and thus could serve as an aid in deciding at site on direct removal of abnormal cervical areas .
another important finding was that very few of the referred via - positive women resulted positive for hpv infection or had a cin2 + lesion on cytology or biopsy .
the main strength of our study is the large sample size and randomized crossover design , which reduced the risk of intraobserver variability .
the fact that all patients had hpv and a cervical cytology test analyzed in an accredited laboratory increases the strength of the study .
the main weakness of our study is that biopsy was performed in patients with swede score of 4 and above and thus not in all patients .
this might have biased our results because we could not compare histopathologic results for swede scores of lower than 4 with swede scores of 4 and above . however , although all women had both a cervical cytology test and an hpv test , the low incidence of hpv and abnormal cytology supports the finding that few via - positive women had an increased swede score and thus were not in need of a biopsy .
another weakness is that women who were included were referred as via positive , and we could not control for how the referrer assessed the woman as via positive .
our finding that most referred via - positive women had neither hpv infection nor cervical lesions is in line with via studies from nigeria , peru , and uganda.4,7,13 in a pooled study from india , women who had never been screened before for cervical cancer had a prevalence of 5.8% high - risk hpv.17 this is interesting to compare with the prevalence of 8.6% high - risk hpv among our study group of via - positive women from bangladesh , who thus did not differ much in hpv prevalence from never - screened women in a nearby region .
these findings emphasize the need of a more accurate method for cervical cancer screening in low - resource settings to avoid overtreatment or unnecessary referrals and thus economic burden on already - restrained economies .
in addition , the finding that 2 of the via - positive women had cervical tuberculosis on biopsy is important to keep in mind , especially because it has only rarely been described in the literature before.18 pimple et al10 suggested a single - visit approach with colposcopy , which gives direct results and could be considered as a secondary testing tool to triage women who were found positive on via in settings where cytology and histopathology services are unavailable .
in addition , a see - and - treat approach was a well - accepted management strategy for high - grade cin in bangladesh because it reduced the number of visits to the clinic and the failure to receive treatment.19 moreover , bowring et al12 proposed that a modified swede score in low - resource settings could predict cervical abnormalities and avoid overtreatment,11 and strander et al11 suggested a see - and - treat approach when the patient has a swede score of 8 and above .
it is reassuring to note that the observed high specificity for cin2 + for swede score of 8 and above in our study is well in line with the results from both strander et al11 and bowring et al12 and support the see - and - treat method when the patient has a swede score of 8 and above .
a recent meta - analysis20 of the accuracy of colposcopy found that the pooled sensitivity for punch biopsy - defined cin2 + disease was 91.3% and the specificity was as low as 24.6% . in most of the studies included , most women had positive punch biopsies , and the authors concluded that the observed high sensitivity of the punch biopsy was probably the result of verification bias , whereas very few cases with negative punch biopsies were referred for colposcopy , thus lowering the specificity of colposcopy.20 also , previous studies on the swede score method took place in settings where most of the patients were referred for colposcopy because of abnormal cytology . in these studies , most patients with cin2 + had swede scores of 5 and above.11,12 the study population in our study was different ; cytology was not available before colposcopy , only the via result , and for many women included in the study , the current visit to the colposcopy clinic might have been their only lifetime opportunity to have a colposcopy examination .
therefore , it is reassuring to note that also in our study population of via - positive women , colposcopy with swede score was able to identify those who are in need of further assessment with biopsy and also which women would benefit from direct treatment .
in our study , the results from cytology showed that 2.5% of the via - positive women had cin2 + , whereas punch biopsy from swede scores of 4 and above found cin2 + in 7% of the women . these findings are comparable with the results from a multicenter study in india , where colposcopy in via - positive women also detected more cin2 + lesions than cytology.21 increasing the swede score cutoff decreased the sensitivity and increased the specificity .
lower specificity for a diagnostic test increases the burden on the health system through increased health care costs for treatment of false positives . thus , we recommend a colposcopy swede score cutoff of 4 for biopsy in low - resource settings as an alternative cervical screening method . in conclusion ,
swede score by colposcopy using a standard colposcope or the gynocular might be an alternative cervical cancer screening method in low - resource settings , enabling the single - visit approach and avoiding overtreatment . | objective : this study aimed to evaluate cervical lesions by the swede colposcopy system , histologic findings , liquid - based cytology , and human papillomavirus ( hpv ) in women who resulted positive for visual inspection of the cervix with acetic acid ( via ) by using a pocket - sized battery - driven colposcope , the gynocular ( gynius ab , sweden ) . methods : this study was a crossover , randomized clinical trial at the colposcopy clinic of bangabandhu sheikh mujib medical university in dhaka , bangladesh , with 540 via - positive women .
swede scores were obtained by the gynocular and stationary colposcope , as well as samples for liquid - based cytology , hpv , and cervical biopsies .
the swede scores were compared against the histologic diagnosis , which was used as the criterion standard . the percentage agreement and the kappa statistic for the gynocular and standard colposcope were also calculated . results : the gynocular and stationary colposcope showed high agreement in swede scores , with a kappa statistic of 0.998 , a p value of less than 0.0001 , and no difference in detecting cervical lesions in biopsy .
biopsy detected cervical intraepithelial neoplasia ( cin ) 2 + ( cin2 , cin3 , and invasive cancer ) in 38 ( 7% ) of the women , whereas liquid - based cytology detected cin2 + in 13 ( 2.5% ) of the women .
forty - four ( 8.6% ) women who were tested resulted positive for hpv ; 20 ( 3.9% ) women had hpv-16 , 2 ( 0.4% ) had hpv-18 , and 22 ( 4.3% ) had other high - risk hpv types . conclusions : our study showed that few via - positive women had cin2 + lesions or hpv infection .
colposcopy by swede score identified significantly more cin2 + lesions than liquid - based cytology and could offer a more accurate screening and selection for immediate treatment of cervical lesions in low - resource settings . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Chocolate Mountain Aerial Gunnery
Range Transfer Act of 2013''.
SEC. 2. TRANSFER OF ADMINISTRATIVE JURISDICTION, CHOCOLATE MOUNTAIN
AERIAL GUNNERY RANGE, CALIFORNIA.
(a) Transfer Required.--The Secretary of the Interior shall
transfer to the administrative jurisdiction of the Secretary of the
Navy certain public land administered by the Bureau of Land Management
in Imperial and Riverside Counties, California, consisting of
approximately 226,711 acres, as generally depicted on the map titled
``Chocolate Mountain Aerial Gunnery Range Proposed-Withdrawal'' dated
1987 (revised July 1993), and identified as WESTDIV Drawing No. C-
102370, which was prepared by the Naval Facilities Engineering Command
of the Department of the Navy and is on file with the California State
Office of the Bureau of Land Management.
(b) Valid Existing Rights.--The transfer of administrative
jurisdiction under subsection (a) shall be subject to any valid
existing rights, including any property, easements, or improvements
held by the Bureau of Reclamation and appurtenant to the Coachella
Canal. The Secretary of the Navy shall provide for reasonable access by
the Bureau of Reclamation for inspection and maintenance purposes not
inconsistent with military training.
(c) Time for Conveyance.--The transfer of administrative
jurisdiction under subsection (a) shall occur pursuant to a schedule
agreed to by the Secretary of the Interior and the Secretary of the
Navy, but in no case later than the date of the completion of the
boundary realignment required by section 4.
(d) Map and Legal Description.--
(1) Preparation and publication.--The Secretary of the
Interior shall publish in the Federal Register a legal
description of the public land to be transferred under
subsection (a).
(2) Submission to congress.--The Secretary of the Interior
shall file with the Committee on Energy and Natural Resources
of the Senate and the Committee on Natural Resources of the
House of Representatives--
(A) a copy of the legal description prepared under
paragraph (1); and
(B) a map depicting the legal description of the
transferred public land.
(3) Availability for public inspection.--Copies of the
legal description and map filed under paragraph (2) shall be
available for public inspection in the appropriate offices of--
(A) the Bureau of Land Management;
(B) the Office of the Commanding Officer, Marine
Corps Air Station Yuma, Arizona;
(C) the Office of the Commander, Navy Region
Southwest; and
(D) the Office of the Secretary of the Navy.
(4) Force of law.--The legal description and map filed
under paragraph (2) shall have the same force and effect as if
included in this Act, except that the Secretary of the Interior
may correct clerical and typographical errors in the legal
description or map.
(5) Reimbursement of costs.--The transfer required by
subsection (a) shall be made without reimbursement, except that
the Secretary of the Navy shall reimburse the Secretary of the
Interior for any costs incurred by the Secretary of the
Interior to prepare the legal description and map under this
subsection.
SEC. 3. MANAGEMENT AND USE OF TRANSFERRED LAND.
(a) Use of Transferred Land.--Upon the receipt of the land under
section 2, the Secretary of the Navy shall administer the land as the
Chocolate Mountain Aerial Gunnery Range, California, and continue to
authorize use of the land for military purposes.
(b) Protection of Desert Tortoise.--Nothing in the transfer
required by section 2 shall affect the prior designation of certain
lands within the Chocolate Mountain Aerial Gunnery Range as critical
habitat for the desert tortoise (Gopherus agassizii).
(c) Withdrawal of Mineral Estate.--Subject to valid existing
rights, the mineral estate of the land to be transferred under section
2 are withdrawn from all forms of appropriation under the public land
laws, including the mining laws and the mineral and geothermal leasing
laws, for as long as the land is under the administrative jurisdiction
of the Secretary of the Navy.
(d) Integrated Natural Resources Management Plan.--Not later than
one year after the transfer of the land under section 2, the Secretary
of the Navy, in cooperation with the Secretary of the Interior, shall
prepare an integrated natural resources management plan pursuant to the
Sikes Act (16 U.S.C. 670a et seq.) for the transferred land and for
land that, as of the date of the enactment of this Act, is under the
jurisdiction of the Secretary of the Navy underlying the Chocolate
Mountain Aerial Gunnery Range.
SEC. 4. REALIGNMENT OF RANGE BOUNDARY AND RELATED TRANSFER OF TITLE.
(a) Realignment; Purpose.--The Secretary of the Interior and the
Secretary of the Navy shall realign the boundary of the Chocolate
Mountain Aerial Gunnery Range, as in effect on the date of the
enactment of this Act, to improve public safety and management of the
Range, consistent with the following:
(1) The northwestern boundary of the Chocolate Mountain
Aerial Gunnery Range shall be realigned to the edge of the
Bradshaw Trail so that the Trail is entirely on public land
under the jurisdiction of the Department of the Interior.
(2) The centerline of the Bradshaw Trail shall be
delineated by the Secretary of the Interior in consultation
with the Secretary of the Navy, beginning at its western
terminus at Township 8 South, Range 12 East, Section 6 eastward
to Township 8 South, Range 17 East, Section 32 where it leaves
the Chocolate Mountain Aerial Gunnery Range.
(b) Transfers Related to Realignment.--The Secretary of the
Interior and the Secretary of the Navy shall make such transfers of
administrative jurisdiction as may be necessary to reflect the results
of the boundary realignment carried out pursuant to subsection (a).
(c) Applicability of National Environmental Policy Act of 1969.--
The National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.)
shall not apply to any transfer of land made under subsection (b) or
any decontamination actions undertaken in connection with such a
transfer.
(d) Decontamination.--The Secretary of the Navy shall maintain, to
the extent funds are available for such purpose and consistent with
applicable Federal and State law, a program of decontamination of any
contamination caused by defense-related uses on land transferred under
subsection (b). The Secretary of Defense shall include a description of
such decontamination activities in the annual report required by
section 2711 of title 10, United States Code.
(e) Timeline.--The delineation of the Bradshaw Trail under
subsection (a) and any transfer of land under subsection (b) shall
occur pursuant to a schedule agreed to by the Secretary of the Interior
and the Secretary of the Navy, but in no case later than two years
after the date of the enactment of this Act.
SEC. 5. EFFECT OF TERMINATION OF MILITARY USE.
(a) Notice and Effect.--Upon a determination by the Secretary of
the Navy that there is no longer a military need for all or portions of
the land transferred under section 2, the Secretary of the Navy shall
notify the Secretary of the Interior of such determination. Subject to
subsections (b), (c), and (d), the Secretary of the Navy shall transfer
the land subject to such a notice back to the administrative
jurisdiction of the Secretary of the Interior.
(b) Contamination.--Before transmitting a notice under subsection
(a), the Secretary of the Navy shall prepare a written determination
concerning whether and to what extent the land to be transferred are
contaminated with explosive, toxic, or other hazardous materials. A
copy of the determination shall be transmitted with the notice. Copies
of the notice and the determination shall be published in the Federal
Register.
(c) Decontamination.--The Secretary of the Navy shall decontaminate
any contaminated land that is the subject of a notice under subsection
(a) if--
(1) the Secretary of the Interior, in consultation with the
Secretary of the Navy, determines that--
(A) decontamination is practicable and economically
feasible (taking into consideration the potential
future use and value of the land); and
(B) upon decontamination, the land could be opened
to operation of some or all of the public land laws,
including the mining laws; and
(2) funds are appropriated for such decontamination.
(d) Alternative.--The Secretary of the Interior is not required to
accept land proposed for transfer under subsection (a) if the Secretary
of the Interior is unable to make the determinations under subsection
(c)(1) or if Congress does not appropriate a sufficient amount of funds
for the decontamination of the land.
SEC. 6. TEMPORARY EXTENSION OF EXISTING WITHDRAWAL PERIOD.
Notwithstanding subsection (a) of section 806 of the California
Military Lands Withdrawal and Overflights Act of 1994 (title VIII of
Public Law 103-433; 108 Stat. 4505), the withdrawal and reservation of
the land transferred under section 2 of this Act shall not terminate
until the date on which the land transfer required by section 2 is
executed.
SEC. 7. WATER RIGHTS.
(a) Water Rights.--Nothing in this Act shall be construed--
(1) to establish a reservation in favor of the United
States with respect to any water or water right on lands
transferred by this Act; or
(2) to authorize the appropriation of water on lands
transferred by this Act except in accordance with applicable
State law.
(b) Effect on Previously Acquired or Reserved Water Rights.--This
section shall not be construed to affect any water rights acquired or
reserved by the United States before the date of the enactment of this
Act. | Chocolate Mountain Aerial Gunnery Range Transfer Act of 2013 - Directs the Secretary of the Interior to transfer to the Secretary of the Navy administrative jurisdiction over certain public lands in Imperial and Riverside Counties, California, for inclusion within the Chocolate Mountain Aerial Gunnery Range. Withdraws the mineral estate of such land from all forms of appropriation under the public land laws, including the mining laws and the mineral and geothermal leasing laws, for as long as the land is under the administrative jurisdiction of the Secretary of the Navy. Requires the Secretary of the Navy to prepare an integrated natural resources management plan with respect to the transferred lands. Requires the Secretary of the Navy, upon determining that there is no longer a military need for the lands transferred, to transfer such lands back to the Secretary of the Interior. Requires appropriate land decontamination preceding the retransfer. Amends the California Military Lands Withdrawal and Overflights Act of 1994 to extend a land withdrawal and reservation period consistent with the above transfer. Provides that nothing in this Act shall affect existing water rights on the transferred lands. |
entanglement plays an important role in quantum information processing @xcite .
quantum teleportation @xcite , quantum dense coding @xcite , quantum key distribution @xcite , and some other protocols @xcite all require entanglement to set up a maximally entangled channel . on the other hand , quantum computation also requires entanglement to be created @xcite .
unfortunately , during the distribution and storage of entanglement , it always suffers from environmental noise , which makes the entanglement degrade .
the degraded entanglement will make quantum communication insecure and will introduce errors into quantum computation . generally , a maximally entangled state will degrade to a mixed state . in an optical system , a maximally
entangled state such as the bell state @xmath0 will become the mixed state @xmath1 @xcite , where
@xmath3 is the horizontal polarization of the photon and @xmath4 is the vertical polarization of the photon . on the other hand , the maximally entangled state @xmath5
can also degrade to a pure less - entangled state @xmath6 @xcite .
distilling high - fidelity mixed states from low - quality mixed states is called entanglement purification @xcite .
there are a lot of excellent works focused on the entanglement purification , such as the entanglement purification protocols based on the controlled - not gate @xcite , linear optics@xcite , nonlinear optics @xcite , and so on @xcite .
the approach of distilling pure maximally entangled states from less - entangled states is called entanglement concentration .
there are also many excellent works for entanglement concentration , such as the entanglement concentration based on the collective measurement @xcite , unitary operation @xcite , linear optics @xcite and so on @xcite .
actually , both of the decoherence models described above exist in a practical entanglement distribution .
unfortunately , previous works dealt with these problems independently :
they focused either on entanglement purification or on entanglement concentration .
therefore , they can only partially solve the problem of decoherence . in this paper
, we will describe a general distillation model for decoherence .
suppose that the pure maximally entangled state @xmath5 degrades both to a mixed state and to a less - entangled state , so that the decoherence model can be described as @xmath8 . here we denote @xmath9 . it is shown that the initial state @xmath5 becomes a mixed state @xmath10 , while each component of the mixed state is still a less - entangled state , say @xmath11 or @xmath12 .
therefore , in order to distill such a mixed state @xmath10 , we not only need to improve the fidelity @xmath13 of the mixed state , but also to concentrate the less - entangled states @xmath11 and @xmath12 into the maximally entangled state . in this paper
, we will describe an approach that can distill such mixed states effectively .
after performing the protocol , we can obtain a high - fidelity mixed state in which each component is a maximally entangled state .
that is , our protocol realizes both entanglement purification and entanglement concentration in one step .
this paper is organized as follows : in sec .
ii , we explain our protocol for the correction of the bit - flip error . in sec .
iii , we describe the distillation of the phase - flip error .
interestingly , both steps can be repeated to obtain a high success probability . in sec .
iv , we extend our protocol to the case of multi - partite entangled systems . in sec .
v , we present a discussion and conclusion .
before we explain our protocol , it is necessary to introduce the cross - kerr nonlinearity , which is the key element of our protocol . from fig .
1 , the hamiltonian of the cross - kerr nonlinearity can be written as @xmath14 @xcite . if we consider a single photon with @xmath3 polarization in the @xmath15 spatial mode ,
this photon combined with the coherent state @xmath16 will evolve to @xmath17 . on the other hand ,
the @xmath4 polarization in the @xmath15 spatial mode combined with the coherent state @xmath18 will evolve to @xmath19 .
therefore , by measuring the phase shift of the coherent state , we can infer the photon number without detecting the single photon directly .
this is the so - called quantum nondemolition ( qnd ) measurement , which has been widely used in quantum information processing @xcite .
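to make the qnd discrimination quantitative , note that two coherent states differing only by a phase theta have overlap |< alpha | alpha e^{i theta} >| = exp[ -|alpha|^2 ( 1 - cos theta ) ] , so even a small cross - kerr phase shift becomes resolvable for a sufficiently bright probe . the short python sketch below evaluates this overlap ; the amplitude and phase values are illustrative assumptions , since the paper's actual parameters sit behind the @xmath placeholders .

import numpy as np

def coherent_overlap(alpha, theta):
    # |<alpha| alpha e^{i theta}>| for a coherent probe of amplitude alpha.
    # a small overlap means the two phase outcomes are nearly orthogonal,
    # i.e. the parity outcome can be read out reliably.
    return np.exp(-abs(alpha) ** 2 * (1.0 - np.cos(theta)))

for alpha in (2.0, 5.0, 10.0):          # illustrative probe amplitudes
    for theta in (0.01, 0.05, 0.10):    # illustrative cross - kerr phase shifts
        print(f"alpha={alpha:4.1f} theta={theta:.2f} "
              f"overlap={coherent_overlap(alpha, theta):.2e}")

for example , alpha = 10 with theta = 0.05 gives an overlap of about 0.88 , while theta = 0.1 gives about 0.61 , which illustrates why either a larger probe amplitude or an amplified kerr phase ( discussed later in the paper ) is needed .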
from fig . 2 , suppose that alice and bob share two pairs of the mixed states in the spatial modes @xmath15 , @xmath20 and @xmath21 , @xmath22 , respectively .
the mixed state @xmath10 is described in eq .
( [ general0 ] ) . before the two pairs pass through the qnd
, they first perform a bit - flip operation on the second pair in the @xmath21 and @xmath22 modes .
the state @xmath11 will then become the corresponding bit - flipped state .
in this section , we will describe the distillation of the phase - flip error . in previous entanglement purification protocols ,
the phase - flip error can be converted to a bit - flip error by hadamard operations and purified in the next round .
however , in this protocol , we cannot treat it as the previous entanglement purification protocols do .
suppose that alice and bob share the mixed state of the form @xmath85 , where @xmath86 is in the @xmath15 and @xmath20 spatial modes and @xmath87 is in the @xmath21 spatial mode .
alice first lets her two photons pass through the qnd .
the states @xmath88 and @xmath89 combined with the coherent state can be described as @xmath90 . obviously , if the coherent state picks up no phase shift , they will obtain the state @xmath5 after measuring the photon in the @xmath91 spatial mode in the @xmath63 basis , with the probability of @xmath92 . on the other hand ,
if the initial state is @xmath93 , they will obtain @xmath55 with the probability of @xmath94 . in this way
, the mixed state @xmath95 becomes a new mixed state @xmath96 , and the success probability is @xmath97 .
certainly , they can also repeat the protocol to increase the success probability if the coherent state picks up the phase shift @xmath57 . in the second round
, alice only needs to prepare another single photon of the form @xmath98 as shown in eq .
( [ single1 ] ) .
following the same principle , if the coherent state picks up no phase shift , they will obtain @xmath99 with the success probability of @xmath100 .
if they repeat this for @xmath78 rounds , the total success probability is @xmath101 . once they obtain the state @xmath99 , it is the standard mixed state with a phase - flip error , as in ref .
@xcite . in this way
, both parties perform the hadamard operation to convert the phase - flip error to a bit - flip error . after performing the hadamard operations on the two photons , the state @xmath5 does not change , while @xmath55 becomes @xmath5 .
the form of the transformed mixed state is the same as the mixed state shown in eq .
( [ newmixed ] ) .
therefore , it can be distilled in a next round .
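since the round - by - round probabilities are hidden behind the @xmath placeholders , it is worth recording the generic bookkeeping that underlies the repeat - until - success statements above : if round n succeeds with probability p_n and only the failure branch is carried into the next round , the total success probability after n_max rounds is , as a simple identity ,

P_total = 1 - \prod_{n=1}^{n_max} ( 1 - p_n ) = p_1 + ( 1 - p_1 ) p_2 + ( 1 - p_1 ) ( 1 - p_2 ) p_3 + \cdots ,

which increases monotonically with the number of rounds , consistent with the behavior quoted for @xmath101 .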
in this section , we will describe the distillation of the multipartite greenberger - horne - zeilinger ( ghz ) state .
an @xmath78-particle ghz state can be described as @xmath102 . suppose that the mixed state can be written as @xmath103 , where @xmath104 and @xmath105 . [ figure : represents the @xmath78-particle ghz state . ] the principle of distilling the multipartite ghz state is similar to the previous description .
the @xmath78 photons are distributed to alice , bob , charlie , etc . in each round , they choose two copies of the mixed state @xmath106 .
the first copy is in the spatial modes @xmath15 , @xmath20 , @xmath107 , and so on .
the second copy is in the modes @xmath21 , @xmath22 , @xmath108 , and so on .
they first perform a bit - flip operation on each photon of the state @xmath109 , which makes the whole state become @xmath110 , while the state @xmath111 becomes @xmath112 . the new mixed state can be written as @xmath113 . as shown in fig .
3 , they choose two copies of the mixed state in each round .
the whole system can be regarded as a mixture of four states : with the probability of @xmath114 , it is in the state @xmath115 ; with the same probability of @xmath26 , it is in the states @xmath116 and @xmath117 ; and with the probability of @xmath29 ,
it is in the state @xmath118 .
the principle of the distillation is similar to the previous one .
after the photons pass through the qnds , they keep the cases in which all the coherent states show no phase shift . in this way
, the cross - combination items can be eliminated automatically .
the remaining states are @xmath102 , with the probability of @xmath38 , and the state @xmath119 , with the probability of @xmath40 . finally , by measuring the even - numbered photons in the basis @xmath63
, they will ultimately obtain the new mixed state @xmath120 , where @xmath121 varies with the initial coefficient @xmath122 . here
we let @xmath123 .
[ figure : @xmath78 is the iteration number . ] on the other hand , if they pick up the same phase shift @xmath57 and measure the even - numbered photons in the @xmath63 basis , they will obtain @xmath124 and @xmath125 . similarly , they only need to prepare the single - photon state of the form shown in eq .
( [ single1 ] ) , and following the same operations shown in the previous section , they will obtain the same mixed state @xmath126 with the probability of @xmath72 . in this way , the protocol can also be repeated to obtain a high success probability .
once the bit - flip error is corrected , the phase - flip error can also be corrected in a next round .
so far , we have fully described our protocol . we first explained the protocol for a bit - flip error . by selecting the cases in which both coherent states pick up no phase shift
, they can obtain a mixed state with higher fidelity . on the other hand ,
if both coherent states pick up the phase shift @xmath57 , the protocol can be repeated to reach a high success probability . in our protocol
, we also show that the phase - flip error can be well distilled .
different from the bit - flip error correction , this is achieved in two steps . in the first step ,
alice prepares a single photon . after the single photon and the mixed state both pass through the qnd in alice 's location , they select the case in which the coherent state picks up no phase shift . in this way
, the original mixed state becomes the standard mixed state in eq .
( [ general3 ] ) .
subsequently , they perform the hadamard operation on both photons to convert the phase - flip error to a bit - flip error , which can be distilled in the conventional way .
interestingly , in the first step , if the coherent state picks up the phase shift @xmath57 , the step can also be repeated to obtain a high success probability .
it is interesting to compare this protocol with previous entanglement purification and concentration protocols . in the pioneering work on purification @xcite
, bit - flip error correction was achieved with the post - selection principle .
ref . @xcite also described an entanglement concentration protocol in which the maximally entangled state is obtained with a success probability of @xmath127 . in our protocol , if we let @xmath128 , our model simplifies to standard entanglement concentration . on the other hand ,
if we let @xmath129 , this protocol is essentially the entanglement purification model .
therefore , our distillation protocol is more practical and universal than both the purification and concentration protocols . actually , if the pure maximally entangled state degrades to the mixed state shown in eq .
( [ general0 ] ) , one could also perform entanglement concentration first to obtain the standard mixed state in eq .
( [ general3 ] ) and perform entanglement purification subsequently . in our protocol , one of the advantages is that it can be completed in one step .
moreover , this protocol can be repeated to obtain a high success probability . in fig .
4 , we calculate the success probability as a function of the initial coefficient @xmath122 in the distillation of the bit - flip error .
we also let @xmath123 . from fig .
4 , it is shown that the success probability increases greatly if we repeat this protocol .
the maximal success probability is only 0.34 with @xmath130 , while it can reach 0.66 with @xmath131 .
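the qualitative growth of the success probability with repetition can be reproduced in a toy model . the python sketch below is a minimal illustration and not the paper's exact formula ( those coefficients sit behind the @xmath placeholders ) : it assumes a less - entangled component of schmidt form with weights ( a^2 , b^2 ) , a parity check that succeeds with probability 2a^2b^2 per round , and recycling of the failure branch with squared , renormalized weights , as in standard concentration schemes .

import numpy as np

def cumulative_success(a, rounds):
    # iterate the parity - check concentration: success probability 2 a^2 b^2
    # per round; the failure branch is renormalized and reused.
    a2 = a ** 2
    b2 = 1.0 - a2
    remaining = 1.0   # probability mass not yet concentrated
    total = 0.0
    for _ in range(rounds):
        p = 2.0 * a2 * b2
        total += remaining * p
        remaining *= 1.0 - p
        norm = a2 ** 2 + b2 ** 2
        a2, b2 = a2 ** 2 / norm, b2 ** 2 / norm
    return total

for n in (1, 2, 5, 10):
    print(n, round(cumulative_success(np.sqrt(0.8), n), 4))

in this toy model the cumulative probability rises with the number of rounds and then saturates , which is the same trend the paper reports for its fig . 4 .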
in our protocol , the cross - kerr nonlinearity plays an important role in the distillation .
the successful case is the one in which the coherent state picks up no phase shift ;
this operation essentially makes a parity check .
they can distinguish the even - parity states @xmath32 and @xmath33 from the odd - parity states @xmath34 and @xmath35 according to the different phase shifts of the coherent states . in a practical experiment
, we should require the phase shift of the cross - kerr nonlinearity to reach a visible value ,
that is , to require @xmath132 .
one method is to increase the amplitude of the coherent state .
the other method is to amplify the normal cross - kerr nonlinearity .
recent theoretical work showed that a giant kerr nonlinearity can be obtained in a multiple quantum - well structure with a four - level , double @xmath133-type configuration @xcite . on the other hand
, weak measurement can also be used to amplify the cross - kerr nonlinearity @xcite . moreover
, a recent experiment reported that a `` giant '' cross - kerr effect with a phase shift of 20 degrees per photon has been observed @xcite .
in conclusion , we have presented a practical entanglement distillation protocol for a general mixed - state model in which
each component is still a less - entangled state .
therefore , our protocol can not only improve the fidelity of the mixed state , but can also concentrate the less - entangled states into the maximally entangled state .
for the bit - flip correction , the distinct advantage is that the process can be completed in one step .
moreover , the protocol can reach a high success probability through repetition , resorting to the qnd measurement .
this protocol has practical applications in future quantum information processing .
this work is supported by the national natural science foundation of china under grant nos .
11104159 and 11347110 , and the project funded by the priority academic program development of jiangsu higher education institutions . | we present a way of entanglement distillation for a genuine mixed state .
different from the conventional mixed state in entanglement purification protocols , where each component is a maximally entangled state , each component of the mixed state in our protocol is a less - entangled state . with the help of the weak cross - kerr nonlinearity
, this entanglement distillation protocol does not require sophisticated single - photon detectors .
moreover , the distilled high - quality entangled state can be retained for further distillation .
this makes it more convenient in practical applications . |
null | we report a patient with end - stage renal disease status after two renal transplantations .
milky ascites was noted after the immunosuppressant agent was switched to sirolimus ( 1 mg / day ) .
chylous ascites was diagnosed owing to the dialysate - to - serum triglyceride ratio of 15.98:15.99 .
a series of studies were all negative .
it is highly suspected that the cause of the chylous ascites is sirolimus - related , because surgery - related lymph vessel injury happens most often within 6 months after transplantation .
sirolimus - related chylous ascites is a rare cause of chylous ascites but the incidence rate increases after transplantation .
side effects of sirolimus include hyperlipidemia , anemia , thrombocytopenia , hepatotoxicity , delayed wound healing , and a high rate of lymphoceles , lymphedema , and pulmonary alveolar proteinosis .
chylous ascites has improved since the switch from sirolimus to other immunosuppressant agents .
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Alaska Native and American Indian
Direct Reimbursement Act of 1998''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) In 1988, Congress enacted section 405 of the Indian
Health Care Improvement Act (25 U.S.C. 1645) that established a
demonstration program to authorize 4 tribally-operated Indian
Health Service hospitals or clinics to test methods for direct
billing and receipt of payment for health services provided to
patients eligible for reimbursement under the medicare or
medicaid programs under titles XVIII and XIX of the Social
Security Act (42 U.S.C. 1395 et seq.; 1396 et seq.), and other
third-party payors.
(2) The 4 participants selected by the Indian Health
Service for the demonstration program began the direct billing
and collection program in fiscal year 1989 and unanimously
expressed success and satisfaction with the program. Benefits
of the program include dramatically increased collections for
services provided under the medicare and medicaid programs, a
significant reduction in the turn-around time between billing
and receipt of payments for services provided to eligible
patients, and increased efficiency of participants being able
to track their own billings and collections.
(3) The success of the demonstration program confirms that
the direct involvement of tribes and tribal organizations in
the direct billing of, and collection of payments from, the
medicare and medicaid programs, and other third-party payor
reimbursements, is more beneficial to Indian tribes than the
current system of Indian Health Service-managed collections.
(4) Allowing tribes and tribal organizations to directly
manage their medicare and medicaid billings and collections,
rather than channeling all activities through the Indian Health
Service, will enable the Indian Health Service to reduce its
administrative costs, is consistent with the provisions of the
Indian Self-Determination Act, and furthers the commitment of
the Secretary to enable tribes and tribal organizations to
manage and operate their health care programs.
(5) The demonstration program was originally to expire on
September 30, 1996, but was extended by Congress to September
30, 1998, so that the current participants would not experience
an interruption in the program while Congress awaited a
recommendation from the Secretary of Health and Human Services
on whether to make the program permanent.
(6) It would be beneficial to the Indian Health Service and
to Indian tribes, tribal organizations, and Alaska Native
organizations to provide permanent status to the demonstration
program and to extend participation in the program to other
Indian tribes, tribal organizations, and Alaska Native health
organizations who operate a facility of the Indian Health
Service.
SEC. 3. DIRECT BILLING OF MEDICARE, MEDICAID, AND OTHER THIRD PARTY
PAYORS.
(a) Permanent Authorization.--Section 405 of the Indian Health Care
Improvement Act (25 U.S.C. 1645) is amended to read as follows:
``(a) Establishment of Direct Billing Program.--
``(1) In general.--The Secretary shall establish a program
under which Indian tribes, tribal organizations, and Alaska
Native health organizations that contract or compact for the
operation of a hospital or clinic of the Service under the
Indian Self-Determination and Education Assistance Act may
elect to directly bill for, and receive payment for, health
care services provided by such hospital or clinic for which
payment is made under title XVIII of the Social Security Act
(42 U.S.C. 1395 et seq.) (in this section referred to as the
`medicare program'), under a State plan for medical
assistance approved under title XIX of the Social Security Act (42
U.S.C. 1396 et seq.) (in this section referred to as the `medicaid
program'), or from any other third party payor.
``(2) Application of 100 percent fmap.--The third sentence
of section 1905(b) of the Social Security Act (42 U.S.C.
1396d(b)) shall apply for purposes of reimbursement under the
medicaid program for health care services directly billed under
the program established under this section.
``(b) Direct Reimbursement.--
``(1) Use of funds.--Each hospital or clinic participating
in the program described in subsection (a) of this section
shall be reimbursed directly under the medicare and medicaid
programs for services furnished, without regard to the
provisions of section 1880(c) of the Social Security Act (42
U.S.C. 1395qq(c)) and sections 402(a) and 813(b)(2)(A), but all
funds so reimbursed shall first be used by the hospital or
clinic for the purpose of making any improvements in the
hospital or clinic that may be necessary to achieve or maintain
compliance with the conditions and requirements applicable
generally to facilities of such type under the medicare or
medicaid programs. Any funds so reimbursed which are in excess
of the amount necessary to achieve or maintain such conditions
shall be used--
``(A) solely for improving the health resources
deficiency level of the Indian tribe; and
``(B) in accordance with the regulations of the
Service applicable to funds provided by the Service
under any contract entered into under the Indian Self-
Determination Act (25 U.S.C. 450f et seq.).
``(2) Audits.--The amounts paid to the hospitals and
clinics participating in the program established under this
section shall be subject to all auditing requirements
applicable to programs administered directly by the Service and
to facilities participating in the medicare and medicaid
programs.
``(3) Secretarial oversight.--
``(A) Quarterly reports.--Subject to subparagraph
(B), the Secretary shall monitor the performance of
hospitals and clinics participating in the program
established under this section, and shall require such
hospitals and clinics to submit reports on the program
to the Secretary on a quarterly basis during the first
2 years of participation in the program and annually
thereafter.
``(B) Annual reports.--Any participant in the
demonstration program authorized under this section as
in effect on the day before the date of enactment of
the Alaska Native and American Indian Direct
Reimbursement Act of 1998 shall only be required to
submit annual reports under this paragraph.
``(4) No payments from special funds.--Notwithstanding
section 1880(c) of the Social Security Act (42 U.S.C.
1395qq(c)) or section 402(a), no payment may be made out of the
special funds described in such sections for the benefit of any
hospital or clinic during the period that the hospital or
clinic participates in the program established under this
section.
``(c) Requirements for Participation.--
``(1) Application.--Except as provided in paragraph (2)(B),
in order to be eligible for participation in the program
established under this section, an Indian tribe, tribal
organization, or Alaska Native health organization shall submit
an application to the Secretary that establishes to the
satisfaction of the Secretary that--
``(A) the Indian tribe, tribal organization, or
Alaska Native health organization contracts or compacts
for the operation of a facility of the Service;
``(B) the facility is eligible to participate in
the medicare or medicaid programs under section 1880 or
1911 of the Social Security Act (42 U.S.C. 1395qq;
1396j);
``(C) the facility meets the requirements that
apply to programs operated directly by the Service; and
``(D) the facility is accredited by an accrediting
body designated by the Secretary or has submitted a
plan, which has been approved by the Secretary, for
achieving such accreditation.
``(2) Approval.--
``(A) In general.--The Secretary shall review and
approve a qualified application not later than 90 days
after the date the application is submitted to the
Secretary unless the Secretary determines that any of
the criteria set forth in paragraph (1) are not met.
``(B) Grandfather of demonstration program
participants.--Any participant in the demonstration
program authorized under this section as in effect on
the day before the date of enactment of the Alaska
Native and American Indian Direct Reimbursement Act of 1998 shall be
deemed approved for participation in the program established under this
section and shall not be required to submit an application in order to
participate in the program.
``(C) Duration.--An approval by the Secretary of a
qualified application under subparagraph (A), or a
deemed approval of a demonstration program under
subparagraph (B), shall continue in effect as long as
the approved applicant or the deemed approved
demonstration program meets the requirements of this
section.
``(d) Examination and Implementation of Changes.--
``(1) In general.--The Secretary, acting through the
Service, and with the assistance of the Administrator of the
Health Care Financing Administration, shall examine on an
ongoing basis and implement--
``(A) any administrative changes that may be
necessary to facilitate direct billing and
reimbursement under the program established under this
section, including any agreements with States that may
be necessary to provide for direct billing under the
medicaid program; and
``(B) any changes that may be necessary to enable
participants in the program established under this
section to provide to the Service medical records
information on patients served under the program that
is consistent with the medical records information
system of the Service.
``(2) Accounting information.--The accounting information
that a participant in the program established under this
section shall be required to report shall be the same as the
information required to be reported by participants in the
demonstration program authorized under this section as in
effect on the day before the date of enactment of the Alaska
Native and American Indian Direct Reimbursement Act of 1998.
The Secretary may from time to time, after consultation with
the program participants, change the accounting information
submission requirements.
``(e) Withdrawal From Program.--A participant in the program
established under this section may withdraw from participation in the
same manner and under the same conditions that a tribe or tribal
organization may retrocede a contracted program to the Secretary under
authority of the Indian Self-Determination Act (25 U.S.C. 450 et seq.).
All cost accounting and billing authority under the program established
under this section shall be returned to the Secretary upon the
Secretary's acceptance of the withdrawal of participation in this
program.''.
(b) Conforming Amendments.--
(1) Section 1880 of the Social Security Act (42 U.S.C.
1395qq) is amended by adding at the end the following:
``(e) For provisions relating to the authority of certain Indian
tribes, tribal organizations, and Alaska Native health organizations to
elect to directly bill for, and receive payment for, health care
services provided by a hospital or clinic of such tribes or
organizations and for which payment may be made under this title, see
section 405 of the Indian Health Care Improvement Act (25 U.S.C.
1645).''.
(2) Section 1911 of the Social Security Act (42 U.S.C.
1396j) is amended by adding at the end the following:
``(d) For provisions relating to the authority of certain Indian
tribes, tribal organizations, and Alaska Native health organizations to
elect to directly bill for, and receive payment for, health care
services provided by a hospital or clinic of such tribes or
organizations and for which payment may be made under this title, see
section 405 of the Indian Health Care Improvement Act (25 U.S.C.
1645).''.
(c) Effective Date.--The amendments made by this section shall take
effect on October 1, 1998. | Alaska Native and American Indian Direct Reimbursement Act of 1998 - Amends the Indian Health Care Improvement Act to make permanent the demonstration program under which Indian tribes, tribal organizations, and Alaska Native health organizations that contract or compact for the operation of a hospital or clinic of the Indian Health Service may directly bill for, and receive payment for, health care services provided by such hospital or clinic for which payment is made under Medicare or Medicaid or from any other third party payor.
Requires participating hospitals and clinics to submit to the Secretary of Health and Human Services quarterly reports on the program during the first two years of participation and annual reports thereafter.
Provides for: (1) application to the Secretary by an Indian tribe, tribal organization, or Alaska Native health organization for participation of a Service facility in the program (the demonstration program was limited to four facilities); (2) the ongoing examination and implementation of necessary administrative changes to facilitate direct billing and reimbursement under the program; and (3) withdrawal from participation in the program. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Horse Protection Amendments Act of
2014''.
SEC. 2. DEFINITION.
Section 2 of the Horse Protection Act (15 U.S.C. 1821) is amended--
(1) by redesignating paragraphs (1), (2) and (3) as
paragraphs (2), (4) and (5), respectively;
(2) by inserting before paragraph (2), as redesignated, the
following:
``(1) The term `Horse Industry Organization' means the
organization established pursuant to section 4(c)(1).''; and
(3) by inserting after paragraph (2), as redesignated, the
following:
``(3) The term `objective inspection' means an inspection
conducted using only inspection methods based on science-based
protocols (including swabbing or blood testing protocols)
that--
``(A) have been the subject of testing and are
capable of producing scientifically reliable,
reproducible results;
``(B) have been subjected to peer review; and
``(C) have received acceptance in the veterinary or
other applicable scientific community.''.
SEC. 3. INCREASING PROTECTIONS FOR HORSES PARTICIPATING IN HORSE SHOWS,
EXHIBITIONS, OR SALES OR AUCTIONS.
(a) Findings.--Section 3 of the Horse Protection Act (15 U.S.C.
1822) is amended--
(1) by redesignating paragraphs (4) and (5) as paragraphs
(5) and (6), respectively; and
(2) by inserting after paragraph (3) the following:
``(4) the Inspector General of the Department of
Agriculture has determined that the program through which the
Secretary inspects horses is not adequate to ensure compliance
with this Act;''.
(b) Horse Shows and Exhibitions.--Section 4 of the Horse Protection
Act (15 U.S.C. 1823) is amended--
(1) by striking subsection (a) and inserting the following:
``(a) Disqualification of Horses.--
``(1) In general.--In addition to being subject to
applicable criminal or civil penalties authorized under section
6, the management of any horse show or horse exhibition shall
disqualify any horse from being shown or exhibited--
``(A) which, upon objective testing, is determined
to be sore; or
``(B) if the management has been notified that the
horse is sore by--
``(i) a person appointed in accordance with
regulations prescribed under subsection (c); or
``(ii) the Secretary.
``(2) Duration of disqualification.--In addition to any
other requirements or penalties imposed under this Act, any
horse that has been determined to be sore by objective testing
shall be disqualified from being shown or exhibited for--
``(A) a period of not less than 30 days for the
first such determination; and
``(B) a period of 90 days for a second
determination and any subsequent determination.''; and
(2) by striking subsection (c) and inserting the following:
``(c) Appointment of Inspectors; Manner of Inspections.--
``(1) Establishment of horse industry organization.--
``(A) In general.--Not later than 180 days after
the date of the enactment of the Horse Protection
Amendments Act of 2014, the Secretary shall prescribe,
by regulation, the establishment of the Horse Industry
Organization, which shall be governed by a board
consisting of not more than 9 individuals, who shall be
appointed in accordance with subparagraphs (B) and (C).
``(B) Members.--Of the 9 members constituting the
Horse Industry Organization Board--
``(i) 2 members shall be appointed by the
Commissioner of Agriculture for the State of
Tennessee to serve for a term of 4 years;
``(ii) 2 members shall be appointed by the
Commissioner of Agriculture for the
Commonwealth of Kentucky to serve for a term of
4 years;
``(iii) 2 members shall represent the
Tennessee Walking Horse industry and shall be
appointed from within such industry by the
members appointed pursuant to clauses (i) and
(ii), in accordance with a process developed by
such members, to serve for an initial term of 3
years; and
``(iv) not more than 3 members shall be
appointed by the 6 members appointed pursuant
to clauses (i) through (iii) to serve for a
term of 4 years.
``(C) Quorum; vacancies.--
``(i) Quorum.--Five members of the Horse
Industry Organization Board shall constitute a
quorum for the transaction of business.
``(ii) Effect of vacancy.--A vacancy on the
Horse Industry Organization Board shall not
impair the authority of the Board.
``(iii) Subsequent appointments.--
Subsequent appointments, including
reappointments of existing Board members, shall
be made in accordance with subparagraph (B),
except that all such appointments shall be for
a term of 4 years.
``(iv) Bylaws.--The members of the Horse
Industry Organization Board, in consultation
with the Secretary, shall develop bylaws and
other policies for operations, the
establishment of committees, and filling
vacancies on the Board.
``(D) Termination.--Section 14(a)(2)(B) of the
Federal Advisory Committee Act (5 U.S.C. App.) shall
not apply to the Horse Industry Organization.
``(E) Licensing requirements.--
``(i) In general.--The Horse Industry
Organization shall establish requirements to
appoint persons qualified--
``(I) to detect and diagnose a
horse which is sore; or
``(II) to otherwise inspect horses
for the purposes of enforcing this Act.
``(ii) Conflicts of interest.--Requirements
established pursuant to clause (i) shall
require any person appointed by the Horse
Industry Organization Board, or a member of the
immediate family of such a person, to be free
from conflicts of interest, by reason of any
association or connection with the walking
horse industry, including--
``(I) through employment by, or the
provision of any services to, any show
manager, trainer, owner, or exhibitor
of Tennessee Walking horses, Spotted
Saddle horses, or Racking horses; and
``(II) training, exhibiting,
shoeing, breeding, or selling Tennessee
Walking horses, Spotted Saddle horses,
or Racking horses.
``(F) Certification.--
``(i) Certification.--After the members of
the Horse Industry Organization Board have been
appointed pursuant to subparagraph (B), the
Secretary shall certify the Horse Industry
Organization in accordance with section 11.7 of
title 9, Code of Federal Regulations
(Certification and licensing of designated
qualified persons), including the training of
inspectors.
``(ii) Revocation of certification.--Not
later than 90 days after the date on which the
Horse Industry Organization is established
pursuant to this paragraph, the Secretary shall
revoke the certification issued to any other
horse industry organization under section 11.7
of title 9, Code of Federal Regulations (or any
successor regulation), as in effect on such
date.
``(2) Responsibilities of horse industry organization.--The
Horse Industry Organization shall--
``(A) establish a formal affiliation with the
management of each horse show, horse exhibition, and
horse sale or auction;
``(B) appoint inspectors to conduct inspections at
each such show, exhibition, and sale or auction;
``(C) identify and contract with equine veterinary
experts to advise the Horse Industry Organization Board
on--
``(i) objective scientific testing methods
and procedures; and
``(ii) the certification of testing
results; and
``(D) otherwise ensure compliance with this Act, in
coordination with the Secretary.''.
(c) Unlawful Acts.--Section 5 of the Horse Protection Act (15
U.S.C. 1824) is amended--
(1) in paragraph (3), by striking ``appoint and retain a
person in accordance with section 4(c) of this Act'' and
inserting ``establish a formal affiliation with the Horse
Industry Organization under section 4(c)(2)(A)'';
(2) in paragraph (4), by striking ``appoint and retain a
qualified person in accordance with section 4(c) of this Act''
and inserting ``establish a formal affiliation with the Horse
Industry Organization under section 4(c)(2)(A)'';
(3) in paragraph (5), by striking ``appointed and retained
a person in accordance with section 4(c) of this Act'' and
inserting ``established a formal affiliation with the Horse
Industry Organization under section 4(c)(2)(A)''; and
(4) in paragraph (6)--
(A) by striking ``appointed and retained a person
in accordance with section 4(c) of this Act'' and
inserting ``established a formal affiliation with the
Horse Industry Organization under section 4(c)(2)(A)'';
and
(B) by striking ``such person or the Secretary''
and inserting ``a person licensed by the Horse Industry
Organization''.
SEC. 4. RULEMAKING.
Not later than 180 days after the date of the enactment of this
Act, the Secretary of Agriculture shall issue regulations to carry out
the amendments made by this Act. | Horse Protection Amendments Act of 2014 - Amends the Horse Protection Act to replace the Designated Qualified Persons program responsible for inspecting horses for soring with a new inspection system. (The soring of horses is any of various actions taken on a horse's limb to produce a higher gait that may cause pain, distress, inflammation, or lameness.) Requires a sore horse to be disqualified from being shown or exhibited for at least 30 days for the first determination that the horse is sore and 90 days for a second determination and any subsequent determination. Requires the Secretary of Agriculture (USDA) to establish a single Horse Industry Organization (HIO) in order to establish a formal affiliation with the management of each horse sale, horse exhibition, and horse sale or auction, appoint inspectors to conduct inspections, contract with equine veterinary experts to advise the HIO Board on objective scientific testing methods and certification of testing results, and otherwise ensure compliance with the Horse Protection Act. Directs the appointment of individuals by the Commissioners of Agriculture for Tennessee and Kentucky to govern the HIO. Requires those individuals to appoint individuals representing the Tennessee Walking Horse industry. |
low - scale supersymmetry has been recognized for some time as the most natural solution to the hierarchy problem . in addition , supersymmetry with r - parity imposed can provide a natural dark matter candidate in the form of the lightest supersymmetric particle ( lsp ) , which is typically the lightest neutralino @xcite . moreover , extending the standard model ( sm ) to the minimal supersymmetric standard model ( mssm ) results in much - improved gauge coupling unification @xcite .
although the putative superpartners have yet to be observed , the recent discovery of a higgs boson with a mass near @xmath4 gev @xcite is consistent with the upper bound on the higgs mass in the mssm , @xmath5 gev @xcite .
naively , one might have expected the superpartners to have already been found based upon naturalness arguments which imply that the masses of the superpartners should be tev - scale or lower .
however , direct searches from the atlas and cms experiments at the large hadron collider are pushing the mass limits on squarks and gluinos above the tev scale @xcite . furthermore , while the higgs mass is below the mssm upper bound , it is still somewhat larger than expected . obtaining such a large higgs mass in the mssm
requires large radiative corrections from couplings to the top / stop quark sector , implying multi - tev scale squark masses and/or large values of tan@xmath6 .
contrary to naive expectations , it is known that it is possible for squarks and other scalars to have heavy multi - tev masses while still solving the hierarchy problem naturally , or at least by only introducing a small amount of fine - tuning .
perhaps the best known such scenario is that of hyperbolic branch / focus point ( hb / fp ) supersymmetry @xcite .
fp superpartner spectra usually feature heavy multi - tev scalars with lighter gauginos .
the lightest neutralino in these spectra is typically of mixed bino - higgsino composition , while the gluino is typically the heaviest of the gauginos and can have a mass up to a few tev . in frameworks such as msugra / cmssm @xcite ,
focus point supersymmetry is realized in regions of the parameter space where the universal scalar mass @xmath2 is much larger than the universal gaugino mass , @xmath3 .
it has been pointed out that although spectra which lie on the focus point can solve the hierarchy problem with low fine - tuning of the electroweak scale , this still requires a large amount of high - scale fine - tuning , at least within the context of msugra / cmssm where @xmath7 is unnatural as there is no _ a priori _ correlation of the high - scale parameters @xcite .
although msugra / cmssm provides a simple and general framework for studying the phenomenology of gravity - mediated supersymmetry breaking , ultimately the supersymmetry breaking soft terms should be determined within a specific model which provides a complete description of physics at the planck scale , such as string theory .
for example , in the context of type ii flux compactifications , soft terms of the form @xmath7 and @xmath8 may be induced by fluxes in type iib string theory with d@xmath9-branes @xcite .
soft terms of this form were studied in @xcite , where they were shown to lead to focus - point regions of the parameter space in which it is possible to obtain a @xmath4 gev higgs and to satisfy the wmap9 @xcite and planck @xcite results on the dark matter relic density , as well as all standard experimental constraints , while maintaining low electroweak fine - tuning . in the following , the possible sets of universal supersymmetry breaking soft terms that may arise in an mssm constructed from intersecting / magnetized d - branes in type iia / type iib string theory will be analyzed .
this model satisfies all global consistency conditions and has many attractive features which make it a suitable candidate for study .
these phenomenological features include three families of quarks and leptons , a single pair of higgs fields , automatic gauge coupling unification , and exotics which are decoupled . from the low - energy effective action of this model
, it will be shown that the well - known special dilaton solution may be obtained from the simplest set of possible f - terms , a result which should be generic to all models of this type .
it will then be shown that there exist more general sets of universal soft terms in the model , where supersymmetry - breaking is also dominated by the dilaton .
for these sets of soft terms , it will be shown that the universal scalar mass and trilinear coupling are fixed so that @xmath10 and @xmath8 , where @xmath11 is the gravitino mass .
it will then be shown that universal soft terms where the universal scalar mass is much larger than the universal gaugino mass , @xmath7 , may be obtained .
finally , the no - scale strict moduli form of the soft terms , @xmath12 , @xmath13 will be shown to be obtainable .
the resulting phenomenology will then be discussed .
[ table : general spectrum for intersecting d6-branes at generic angles , where @xmath14 and @xmath15 , with @xmath16 . here @xmath17 is the multiplicity , and @xmath18 and @xmath19 denote the symmetric and antisymmetric representations of u(@xmath20 ) , respectively . ]
from the effective scalar potential it is possible to study the stability @xcite , the tree - level gauge couplings @xcite , gauge threshold corrections @xcite , and gauge coupling unification @xcite .
the effective yukawa couplings @xcite , matter field khler metric and soft - susy breaking terms have also been investigated @xcite .
a more detailed discussion of the khler metric and string scattering of gauge , matter , and moduli fields has been performed in @xcite .
although turning on type iib 3-form fluxes can break supersymmetry from the closed string sector @xcite , there are additional terms in the superpotential generated by the fluxes and there is currently no satisfactory model which incorporates this .
thus , we do not consider this option in the present work .
the @xmath21 supergravity action depends upon three functions : the holomorphic gauge kinetic function @xmath22 , the khler potential @xmath23 , and the superpotential @xmath24 .
each of these will in turn depend upon the moduli fields which describe the background upon which the model is constructed .
the holomorphic gauge kinetic function for a d6-brane wrapping a calibrated three - cycle is given by @xcite @xmath25 . in terms of the three - cycle wrapped by the stack of branes , we have @xmath26 , from which it follows that @xmath27 , where @xmath28 for @xmath29 and @xmath30 for @xmath31 or @xmath32 gauge groups , and where we use the @xmath33 and @xmath34 moduli in the supergravity basis . in the string theory basis
, we have the dilaton @xmath35 , three khler moduli @xmath36 , and three complex structure moduli @xmath37 @xcite .
these are related to the corresponding moduli in the supergravity basis by @xmath38 , where @xmath39 is the four - dimensional dilaton . to second order in the string matter fields , the khler potential is given by @xmath40 . the untwisted moduli @xmath41 , @xmath42 are light , non - chiral scalars from the field theory point of view , associated with the d - brane positions and wilson lines . in the following , it will be assumed that these fields become massive via higher - dimensional operators . for twisted moduli arising from strings stretching between stacks @xmath43 and @xmath44 , we have @xmath45 , where @xmath46 is the angle between the cycles wrapped by the stacks of branes @xmath43 and @xmath44 on the @xmath47 torus . then , for the khler metric in type iia theory we find the following two cases :

* @xmath48 , @xmath49 , @xmath50 : @xmath51

* @xmath48 , @xmath52 , @xmath50 : @xmath53

for branes which are parallel on at least one torus , giving rise to non - chiral matter in bifundamental representations ( for example , the higgs doublets which arise from the bc sector , where stacks b and c are parallel on the first torus ) , the khler metric is @xmath54 . the superpotential is given by @xmath55 , while the minimum of the f part of the tree - level supergravity scalar potential @xmath56 is given by @xmath57 , where @xmath58 and @xmath59 , @xmath60 is the inverse of @xmath61 , and the auxiliary fields @xmath62 are given by @xmath63 . supersymmetry is broken when some of the f - terms of the hidden - sector fields @xmath64 acquire vevs .
this then results in soft terms being generated in the observable sector .
for simplicity , it is assumed in this analysis that the @xmath65-term does not contribute to the susy breaking ( see @xcite ) .
the goldstino is then eaten by the gravitino via the superhiggs effect .
the gravitino thereby obtains a mass @xmath66 . the normalized gaugino mass parameters , scalar mass - squared parameters , and trilinear parameters may respectively be given in terms of the khler potential , the gauge kinetic function , and the superpotential as @xmath67 ( eq . ( [ softterms ] ) ) , where @xmath68 is the khler metric appropriate for branes which are parallel on at least one torus , i.e. , involving non - chiral matter .
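since eq . ( [ softterms ] ) itself is hidden behind the @xmath67 placeholder , it may help to record the standard tree - level supergravity expressions of this type from the literature ( e.g. , brignole , ibanez , and munoz ) ; the notation below ( khler potential k , gauge kinetic functions f_a , matter khler metrics \tilde{k}_i , yukawa couplings y_{ijk} , planck units ) is an assumption for illustration and may differ in detail from the paper's conventions :

m_{3/2} = e^{k/2} |w| ,
m_a = \frac{1}{2 \, {\rm re} f_a} \, f^m \partial_m f_a ,
m_i^2 = m_{3/2}^2 + v_0 - f^m \bar{f}^{\bar{n}} \partial_m \partial_{\bar{n}} \log \tilde{k}_i ,
a_{ijk} = f^m \left[ k_m + \partial_m \log y_{ijk} - \partial_m \log ( \tilde{k}_i \tilde{k}_j \tilde{k}_k ) \right] ,

where v_0 is the vacuum energy and k_m = \partial_m k .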
in the present case , the higgs fields arise from vectorlike matter in the @xmath69 sector , where the @xmath70 and @xmath71 stacks are parallel on the first two - torus .
we allow the dilaton @xmath33 to obtain a non - zero f - term vev , as well as the @xmath34-moduli . to do this
, we parameterize the @xmath72-terms as @xmath73 .
the goldstino is included in the gravitino by @xmath74 in @xmath35 field space , and the @xmath75 parameterize the goldstino direction in @xmath37 space , where @xmath76 .
the goldstino angle @xmath77 determines the degree to which susy breaking is dominated by the dilaton @xmath33 and/or the complex structure ( @xmath78 ) and khler ( @xmath79 ) moduli .
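a commonly used parameterization consistent with this description is sketched below ; the overall normalizations and phase conventions are assumptions , since the paper's @xmath73 is elided :

f^s = \sqrt{3} \, m_{3/2} \, ( s + \bar{s} ) \, \sin\theta \, e^{-i\gamma_s} ,
f^{u_i} = \sqrt{3} \, m_{3/2} \, ( u_i + \bar{u}_i ) \, \cos\theta \, \theta_i \, e^{-i\gamma_i} , \qquad \sum_i \theta_i^2 = 1 ,

so that \sin\theta \to 1 corresponds to dilaton domination and \cos\theta \to 1 to moduli domination .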
then , the formula for the gaugino mass associated with each stack can be expressed as @xmath80 , with ( j , k , l ) a permutation of ( 1 , 2 , 3 ) .
the bino mass parameter is a linear combination of the gaugino mass for each stack , and the coefficients corresponding to the linear combination of @xmath81 factors define the hypercharge .
the trilinear parameters generalize as @xmath82 + \frac{\sqrt{3}}{2} m_{3/2} ( \theta_{1} e^{-i\gamma_1} + \theta_s e^{-i\gamma_s} ) , where @xmath83 corresponds to @xmath77 and there is a contribution from the dilaton via the higgs ( 1/2 bps ) khler metric , which also gives an additional contribution to the higgs scalar mass - squared values : @xmath84 . here @xmath43 , @xmath44 , and @xmath85 label the stacks of branes whose mutual intersections define the fields present in the corresponding trilinear coupling , and the angle differences are defined as @xmath86 . we must be careful when dealing with cases where an angle difference is negative .
note that for the present model , there are always either one or two of the @xmath87 which are negative .
let us define the parameter @xmath88 such that @xmath89 indicates that only one of the angle differences is negative , while @xmath90 indicates that two of the angle differences are negative .
finally , the squark and slepton ( 1/4 bps ) scalar mass - squared parameters are given as @xmath91 , where we include the @xmath92 in the sum .
the functions @xmath93 and @xmath94 are given by eqs . ( [ eqn : psi1 ] ) and ( [ eqn : psi2 ] ) , and the terms associated with the complex moduli in @xmath95 and @xmath96 are shown in eqs . ( [ idb : eq : dthdu ] ) and ( [ idb : eq : dth2du ] ) . the functions @xmath97 in the above formulas are @xmath99 for @xmath98 and @xmath101 for @xmath100 , and the function @xmath94 is just the derivative @xmath102 .
the first derivatives @xmath103 are defined @xcite piecewise as @xmath105^p_q when j = k , and as [ \frac{1}{4\pi} \sin( 2\pi\theta^j ) ]^p_q when j \neq k ( eq . ( [ idb : eq : dthdu ] ) ) .
the second derivatives @xmath104 are @xmath106^p_q when j = k = l ; \frac{1}{16\pi} [ \sin( 4\pi\theta^j ) - 4\sin( 2\pi\theta^j ) ]^p_q when j \neq k = l ; -\frac{1}{16\pi} [ \sin( 4\pi\theta^j ) ]^p_q when j = k \neq l or j = l \neq k ; and \frac{1}{16\pi} [ \sin( 4\pi\theta^j ) ]^p_q when j \neq k \neq l \neq j ( eq . ( [ idb : eq : dth2du ] ) ) .
the terms associated with the dilaton are given by @xmath107^p_q ( eq . ( [ idb : eq : dthdus2 ] ) ) ; by @xmath108^p_q when j = k and -\frac{1}{16\pi} [ \sin 4\pi\theta^j ]^p_q when j \neq k ( eq . ( [ idb : eq : dth2duds ] ) ) ; and by @xmath109^p_q ( eq . ( [ idb : eq : dth2dss ] ) ) , where @xmath110 .
the @xmath75 parameters are constrained as @xmath111 .
first , we consider the case where the goldstino angles and dilaton are all equal , namely @xmath112 . in addition
, we set @xmath113 . for the gaugino mass associated with each stack of d - branes we have @xmath114 , with ( j , k , l ) a permutation of ( 1 , 2 , 3 ) , where the holomorphic gauge kinetic function is given by @xmath115 , from which it follows that there is a universal gaugino mass associated with each stack of d - branes : @xmath116 . the @xmath117 holomorphic gauge function is given by taking a linear combination of the holomorphic gauge functions from all the stacks .
note that we have absorbed a factor of @xmath118 in the definition of @xmath117 so that the electric charge is given by @xmath119 . in this way
, it is found @xcite that @xmath120 , where the coefficients @xmath121 correspond to the linear combination of @xmath81 factors which define the hypercharge , @xmath122 .
the gaugino mass for @xmath123 is a linear combination of the gaugino mass for each stack , @xmath124 . thus , the gaugino masses are universal : @xmath125 . the higgs scalar masses are given by @xmath84 ; with @xmath126 , we have @xmath127 . for the trilinear couplings , we have @xmath128 ; now @xmath129 , and therefore we have @xmath130 . the scalar masses for squarks and sleptons are given by @xmath131 ; now @xmath132 , and thus we find that the scalar masses for squarks and sleptons are universal : @xmath133 . in summary , taking all goldstino angles to be equal yields universal soft terms of the form @xmath134 . it should be noted that this solution for the soft terms is more - or - less model independent and should be present for any pati - salam model of this type constructed from intersecting / magnetized d - branes .
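for orientation , in the fully dilaton - dominated limit of the standard parameterization recalled above , the well - known pattern of universal soft terms is

m_{1/2} = \sqrt{3} \, m_{3/2} , \qquad m_0 = m_{3/2} , \qquad a = -\sqrt{3} \, m_{3/2} = -m_{1/2} ;

whether the model's @xmath134 coincides with this exactly cannot be read off here , since the explicit expression sits behind the placeholder , but the special dilaton solution quoted in the text is of this general type .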
we have seen in the previous section that if all of the goldstino angles are equal , then the soft terms are universal and we obtain the well - known special dilaton solution . in the following , let us consider more general possibilities where universal soft terms may be obtained . for the present model , the complex structure moduli and dilaton in the field theory basis are given by @xmath135 , while the real parts of the holomorphic gauge kinetic functions for each stack are given by @xmath136 and @xmath137 , from which it follows that the mssm gauge couplings are unified at the string scale , @xmath138 @xcite .
a scan over non - universal soft terms was made in @xcite , and some of the phenomenological consequences have been studied in @xcite . the gaugino masses may be written in terms of the goldstino angles as @xmath139 . from these expressions , it can be seen that setting @xmath140 and @xmath141 results in universal gaugino masses of the form @xmath142 . the trilinear a - term may also be written in terms of the goldstino angles as @xmath143 .
letting @xmath144 and @xmath145 as in the gaugino masses , the universal trilinear coupling takes the same simple form as for the special dilaton : @xmath146 . however , in the present case the relationship between the gaugino mass and the gravitino mass may differ from the special dilaton case , depending upon the values assigned to @xmath147 and @xmath148 .
this will have important consequences for the phenomenology , as shall be discussed later .
Next, let us turn to the expressions for the scalar masses. As before, the Higgs scalar masses are given by

@xmath149 = m^2_{3/2}\left[1 - \frac{3}{2}\left(\theta_{12}^2 + \theta_{3s}^2\right)\right] = \frac{m^2_{3/2}}{4},

where @xmath150.
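The final numerical value follows in one line if the constraint on the goldstino angles quoted earlier takes the standard form in which the squared angles sum to unity (an assumption here, since that constraint is hidden inside a placeholder). With the identifications @xmath140 and @xmath141, the four angles pair up, so

\[ 2\theta_{12}^2 + 2\theta_{3s}^2 = 1 \;\Rightarrow\; \theta_{12}^2 + \theta_{3s}^2 = \tfrac{1}{2}, \qquad m_H^2 = m_{3/2}^2\left[1 - \tfrac{3}{2}\cdot\tfrac{1}{2}\right] = \frac{m_{3/2}^2}{4}. \]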
It should be noted that this result is the same as for the special dilaton solution of the previous section. For @xmath151 and @xmath152, the scalar masses for squarks and sleptons may be written as

@xmath153^p_q + \theta^2_{12}\cdot\left[\sin(4\pi\theta^3) - 4\sin(2\pi\theta^3)\right]^p_q + 2\left[\sin(4\pi\theta^3)\right]^p_q\cdot\left(\theta_{3s}^2 + \theta_{12}^2 - 4\,\theta_{3s}\theta_{12}\right)\cdot\psi\left(\theta^3_{pq}\right).

Inserting the appropriate angles, as shown in Table [angles], this expression becomes @xmath154, where @xmath155 and @xmath156. Then, if we set @xmath157, we obtain a universal scalar mass given by @xmath158.

In summary, setting @xmath140, @xmath141, and @xmath159 results in universal soft terms of the form @xmath160. These soft terms appear to be a generalized form of the special dilaton solution. In particular, setting all goldstino angles equal results in precisely the special dilaton; in the present case, however, one may obtain a different result for the gaugino mass for more general goldstino angles.
Let us now include the Kähler moduli in the supersymmetry breaking by parameterizing the F-terms as @xmath161. In the following we shall take @xmath162 and @xmath163, and we set the CP-violating phases to zero. In addition, we shall set @xmath164 in order to have a universal scalar mass for squarks and sleptons. Then the gaugino masses take the same universal form as before: @xmath165. The Higgs scalar masses become @xmath166, while the scalar masses for squarks and sleptons become @xmath167. Assuming that the dependence of the soft terms on the Yukawa couplings through the Kähler moduli may be ignored, the trilinear coupling becomes @xmath168. If we take @xmath169 and @xmath170, we obtain the no-scale strict moduli scenario: @xmath171. Note that in this case the supersymmetry breaking is dominated by the Kähler moduli, while the dilaton does not participate. This is just what is expected for the no-scale form, and the limit is referred to as the strict moduli-dominated scenario.
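For orientation, the no-scale strict moduli boundary conditions invoked here are the standard ones of no-scale supergravity, stated from the general literature rather than read off from the placeholder expressions: at the boundary scale the gaugino mass is the only non-vanishing soft parameter,

\[ m_0 = 0, \qquad A_0 = 0, \qquad M_{1/2} \neq 0, \]

with $B_0 = 0$ usually imposed as well; all scalar soft masses are then generated radiatively in the running down from the high scale.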
[Figure: the @xmath2 vs. @xmath3 plane with @xmath8, @xmath172, tan @xmath173, and @xmath174 GeV. The region shaded in black indicates a relic density @xmath175, the region shaded in red indicates @xmath176, and the region shaded in green has a charged LSP. The black contour lines indicate the lightest CP-even Higgs mass.]

[Figure: the region shaded in black indicates @xmath177. The upper limit on the cross-section obtained from the XENON100 experiment is shown in blue, with the @xmath178 bounds shown as dashed curves, while the red dashed curve indicates the future reach of the XENON1T experiment.]
In the previous sections, several different forms for the supersymmetry-breaking soft terms that may arise from a realistic intersecting/magnetized D-brane model were discussed. Two of these are well known, namely the special dilaton form and the no-scale strict moduli form, which arise from dilaton-dominated and Kähler-moduli-dominated supersymmetry breaking, respectively. The phenomenology of both of these scenarios has been extensively explored, and neither case presently has a viable parameter space which can satisfy experimental constraints @xcite. On the other hand, a different form for the soft terms was also explored, which appears to be a generalized form of dilaton-dominated supersymmetry breaking. In particular, it was found that if @xmath179 and @xmath180, the universal trilinear term is always equal to the negative of the gaugino mass, @xmath8. Furthermore, the universal scalar mass is given by @xmath181. From this expression, it may be observed that @xmath2 is always larger than the universal gaugino mass, @xmath182, if @xmath183. In particular, it is possible to have a scalar mass which is arbitrarily large compared to the gaugino mass, such that @xmath7. Generically, this may occur if either @xmath147 or @xmath148 is negative.
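To make the last statement concrete, consider a small worked illustration. The overall normalization below is an assumption, chosen only so that the equal-angle limit reproduces the special-dilaton relation between the gaugino mass and the scalar mass; the exact expression is the placeholder @xmath181 above. Suppose

\[ M_{1/2} = \frac{\sqrt{3}}{2}\left(\theta_{12} + \theta_{3s}\right) m_{3/2}, \qquad \theta_{12}^2 + \theta_{3s}^2 = \frac{1}{2}, \qquad m_0 = \frac{m_{3/2}}{2}. \]

At equal angles, $\theta_{12} = \theta_{3s} = \tfrac{1}{2}$, this gives $M_{1/2} = \tfrac{\sqrt{3}}{2} m_{3/2} = \sqrt{3}\, m_0$. Moving along the constraint circle toward $\theta_{3s} = -\theta_{12}$ sends $\theta_{12} + \theta_{3s} \to 0$, so $M_{1/2} \to 0$ while $m_0 = m_{3/2}/2$ stays fixed; the hierarchy $m_0 \gg M_{1/2}$ opens up precisely when one of the two angles becomes negative, in line with the statement above.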
An important question is whether or not this form for the soft terms leads to phenomenologically viable superpartner spectra. It should be noted that these soft terms correspond to one corner of the full mSUGRA/CMSSM parameter space. A scan of the mSUGRA/CMSSM parameter space was made in @xcite with the trilinear term fixed as @xmath8. A plot of this parameter space is shown in Fig. [fig:msugra_countourplanetb30]. As can be seen from this plot, the viable parameter space consists of a strip in the @xmath2 vs. @xmath3 plane where @xmath2 is several times larger than @xmath3. This, of course, corresponds to the focus-point region of the hyperbolic branch of mSUGRA/CMSSM. The spectra corresponding to these regions of the parameter space feature squarks and sleptons with masses above @xmath184 TeV, a gluino mass in the @xmath185 TeV range, as well as neutralinos and charginos below @xmath186 TeV.
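The mechanics of such a scan are simple enough to sketch in code. The snippet below is a schematic illustration only: the physics helpers (compute_spectrum, relic_density, lightest_charged_sparticle_is_lsp) are toy stand-ins invented for this sketch, the numbers inside them are tuned merely to mimic the qualitative features quoted above, and in a real analysis they would be replaced by calls to a spectrum calculator and a dark-matter code. Only the structure, a loop over the (m0, M1/2) plane with the trilinear coupling tied to the gaugino mass as A0 = -M1/2 and tan beta fixed, reflects the scan described in the text.

import numpy as np

TAN_BETA = 30.0   # assumed from the figure label "tb30"; an assumption
SGN_MU = +1       # sign of the mu parameter; also an assumption

# Toy stand-ins so the sketch runs end-to-end. In a real analysis these
# would be replaced by a spectrum calculator and a relic-density code;
# none of the numbers below are real physics.

def compute_spectrum(m0, m12, a0, tan_beta, sgn_mu):
    """Toy 'spectrum' that simply records the boundary conditions."""
    return {"m0": m0, "m12": m12, "a0": a0}

def lightest_charged_sparticle_is_lsp(spec):
    """Toy criterion mimicking the charged-LSP (e.g., stau) region."""
    return spec["m0"] < 0.2 * spec["m12"]

def relic_density(spec):
    """Toy stand-in tuned so the 'good' strip sits near m0 ~ 5 * M1/2,
    mimicking the focus-point-like strip described in the text."""
    return 0.11 * spec["m0"] / (5.0 * spec["m12"])

def classify_point(m0, m12):
    """Classify one point of the plane, with A0 tied to the gaugino mass."""
    a0 = -m12  # the boundary condition A0 = -M1/2 singled out in the text
    spec = compute_spectrum(m0, m12, a0, TAN_BETA, SGN_MU)
    if lightest_charged_sparticle_is_lsp(spec):
        return "charged LSP"       # excluded region (green in the figure)
    omega = relic_density(spec)
    if 0.10 <= omega <= 0.12:      # roughly the observed relic density
        return "viable strip"
    return "other"

def scan(m0_max=6000.0, m12_max=2000.0, steps=50):
    """Grid scan of the (m0, M1/2) plane (masses in GeV)."""
    return {
        (m0, m12): classify_point(m0, m12)
        for m0 in np.linspace(200.0, m0_max, steps)
        for m12 in np.linspace(200.0, m12_max, steps)
    }

if __name__ == "__main__":
    results = scan()
    n_viable = sum(tag == "viable strip" for tag in results.values())
    print(f"{n_viable} viable points out of {len(results)}")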
The LSP for these spectra is of mixed bino-higgsino composition, with masses in the range @xmath187 GeV. A plot of the direct dark matter detection proton-neutralino cross-sections versus neutralino mass is shown in Fig. [fig:directdetcrosssectionsvsneutralinomasstb30a]. As can be seen from this plot, the direct-detection cross-sections for these spectra are just in the range probed by the XENON100 experiment @xcite. In addition, the upcoming XENON1T experiment @xcite will thoroughly cover this parameter space and will either make a discovery or rule out this parameter space, assuming R-parity conservation, which leads to a stable dark matter candidate. It should be pointed out that a variation of this model exists where baryon and lepton number may be gauged, so the imposition of R-parity may not be necessary to solve the problem of rapid proton decay @xcite.
It has been demonstrated that universal supersymmetry-breaking soft terms may arise in a realistic MSSM constructed in Type II string theory with intersecting/magnetized D-branes. In particular, it has been found that these soft terms are characterized by a universal scalar mass which is always equal to one-half of the gravitino mass, and a universal trilinear term which is always equal to the negative of the universal gaugino mass. For the simplest case, where the goldstino angles for the three complex structure moduli and the dilaton are all equal, the soft terms are those of the well-known special dilaton. However, it was found that more general sets of universal soft terms, with different values for the universal gaugino mass, also exist. In particular, it was found that it is possible for the universal scalar mass to be arbitrarily large in comparison to the universal gaugino mass. Thus, for the model under study, it may be natural to have scalar masses which are much larger than the gaugino mass. While the observed mass of the Higgs is below the expected MSSM upper bound, obtaining a @xmath4 GeV Higgs mass requires large radiative corrections from the top/stop sector, implying heavy squarks with multi-TeV masses. Superpartner spectra with such large scalar masses may solve the hierarchy problem with low fine-tuning of the electroweak scale.

The parameter space corresponding to the particular form of the soft terms @xmath7 and @xmath8 has been previously studied, and the results of this study were reviewed. Viable spectra from this region of the parameter space feature squarks and sleptons with masses above @xmath184 TeV, a @xmath185 TeV gluino mass, as well as light neutralinos and charginos at the TeV scale or below. In addition, the LSP for these spectra is of mixed bino-higgsino composition, with masses in the range @xmath187 GeV and a higgsino fraction of roughly @xmath188. Moreover, the spin-independent dark matter direct-detection proton-neutralino cross-sections are currently being probed by the XENON100 experiment and will be completely tested by the upcoming XENON1T experiment. It was shown that the soft terms corresponding to this parameter space are naturally and easily obtained from the model.
The author would like to thank James Maxin and Dimitri Nanopoulos for helpful discussions while this manuscript was being prepared.
J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos and M. Srednicki, Phys. Lett. B 127, 233 (1983).
J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive and M. Srednicki, Nucl. Phys. B 238, 453 (1984).
J. R. Ellis, D. V. Nanopoulos and K. Tamvakis, Phys. Lett. B 121, 123 (1983).
S. Dimopoulos, S. Raby and F. Wilczek, Phys. Rev. D 24, 1681 (1981).
L. E. Ibanez and G. G. Ross, Phys. Lett. B 105, 439 (1981).
G. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716, 1 (2012) [arXiv:1207.7214 [hep-ex]].
S. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716, 30 (2012) [arXiv:1207.7235 [hep-ex]].
M. S. Carena and H. E. Haber, Prog. Part. Nucl. Phys. 50, 63 (2003) [hep-ph/0208209].
G. Aad et al. [ATLAS Collaboration], arXiv:1208.0949 [hep-ex].
G. Aad et al. [ATLAS Collaboration], JHEP 1207, 167 (2012) [arXiv:1206.1760 [hep-ex]].
J. L. Feng, K. T. Matchev and T. Moroi, Phys. Rev. Lett. 84, 2322 (2000) [hep-ph/9908309].
J. L. Feng, K. T. Matchev and T. Moroi, Phys. Rev. D 61, 075005 (2000) [hep-ph/9909334].
H. Baer, C.-H. Chen, F. Paige and X. Tata, Phys. Rev. D 52, 2746 (1995) [hep-ph/9503271].
H. Baer, C.-H. Chen, M. Drees, F. Paige and X. Tata, Phys. Rev. D 59, 055014 (1999) [hep-ph/9809223].
U. Chattopadhyay, A. Corsetti and P. Nath, Phys. Rev. D 68, 035005 (2003) [hep-ph/0303201].
A. H. Chamseddine, R. L. Arnowitt and P. Nath, Phys. Rev. Lett. 49, 970 (1982).
N. Ohta, Prog. Theor. Phys. 70, 542 (1983).
L. J. Hall, J. D. Lykken and S. Weinberg, Phys. Rev. D 27, 2359 (1983).
H. Baer, V. Barger, P. Huang, D. Mickelson, A. Mustafayev and X. Tata, arXiv:1210.3019 [hep-ph].
P. G. Camara, L. E. Ibanez and A. M. Uranga, Nucl. Phys. B 689, 195 (2004) [hep-th/0311241].
R. Blumenhagen, M. Cvetic, P. Langacker and G. Shiu, Ann. Rev. Nucl. Part. Sci. 55, 71 (2005) [hep-th/0502005].
R. Blumenhagen, B. Kors, D. Lust and S. Stieberger, Phys. Rept. 445, 1 (2007) [hep-th/0610327].
M. Cvetic, G. Shiu and A. M. Uranga, Phys. Rev. Lett. 87, 201801 (2001); Nucl. Phys. B 615, 3 (2001).
C. M. Chen, T. Li, V. E. Mayes and D. V. Nanopoulos, Phys. Lett. B 665, 267 (2008) [hep-th/0703280].
C. M. Chen, T. Li, V. E. Mayes and D. V. Nanopoulos, Phys. Rev. D 77, 125023 (2008) [arXiv:0711.0396 [hep-ph]].
M. Cvetic, T. Li and T. Liu, Nucl. Phys. B 698, 163 (2004) [hep-th/0403061].
C. M. Chen, T. Li and D. V. Nanopoulos, Nucl. Phys. B 740, 79 (2006) [hep-th/0601064].
R. Blumenhagen, B. Kors, D. Lust and T. Ott, Nucl. Phys. B 616, 3 (2001) [hep-th/0107138].
D. Lust, P. Mayr, R. Richter and S. Stieberger, Nucl. Phys. B 696, 205 (2004); A. Font and L. E. Ibanez, JHEP 0503, 040 (2005).
J. F. G. Cascales and A. M. Uranga, JHEP 0305, 011 (2003).
F. Marchesano and G. Shiu, Phys. Rev. D 71, 011701 (2005); JHEP 0411, 041 (2004).
M. Cvetic and T. Liu, Phys. Lett. B 610, 122 (2005).
M. Cvetic, T. Li and T. Liu, Phys. Rev. D 71, 106008 (2005).
J. Kumar and J. D. Wells, JHEP 0509, 067 (2005).
C. M. Chen, V. E. Mayes and D. V. Nanopoulos, Phys. Lett. B 633, 618 (2006).
J. A. Maxin, V. E. Mayes and D. V. Nanopoulos, Phys. Lett. B 690, 501 (2010) [arXiv:0911.2806 [hep-ph]].
J. A. Maxin, V. E. Mayes and D. V. Nanopoulos, Phys. Rev. D 79, 066010 (2009) [arXiv:0809.3200 [hep-ph]].
E. Aprile et al. [XENON100 Collaboration], Phys. Rev. Lett. 107, 131302 (2011) [arXiv:1104.2549 [astro-ph.CO]].
E. Aprile et al. [XENON100 Collaboration], Phys. Rev. Lett. 109, 181301 (2012) [arXiv:1207.5988 [astro-ph.CO]]. | *Abstract* In Type II string vacua constructed from intersecting/magnetized D-branes, the supersymmetry-breaking soft terms are generically non-universal.
It is shown that universal supersymmetry-breaking soft terms may arise in a realistic MSSM constructed from intersecting/magnetized D-branes in Type II string theory.
For the case of dilaton-dominated supersymmetry breaking, it is shown that the universal scalar mass and trilinear coupling are fixed such that @xmath0 and @xmath1.
In addition, soft terms where the universal scalar mass @xmath2 is much larger than the universal gaugino mass @xmath3 may easily be obtained within the model.
Finally, it is shown that the special dilaton and no-scale strict moduli boundary conditions, which are well known in heterotic string constructions, may also be obtained.
Jeremy Lin and Tim Tebow have been phenomenons who transcended their sports this fall and winter. Each helped reverse stumbling franchises; each has extraordinary leadership skills; each is a model of humility and a person of faith.
But that's about where the similarities end. Lin has shown himself to be a singular talent, scoring more points in his first five starts than any NBA player since the ABA merger, according to the Elias Sports Bureau. Tebow ranked 27th in NFL quarterback ratings.
Sure, Tebow led the Denver Broncos to the playoffs when they were going nowhere, his season highlighted by his third overtime victory in eight starts on Dec. 11, a 13-10 win over the Chicago Bears. Lin, who has started five games now, has led the New York Knicks to six consecutive victories, including the stunning victory Tuesday night over Toronto when he nailed a last-second three-pointer for the win.
Their paths to iconic status, however, have been entirely different. Tebow came into the NFL as one of the most highly publicized rookies ever. He won the Heisman Trophy at Florida, was on a couple of BCS championship teams. He was the quintessential college quarterback.
To be sure, there were questions about whether he could be an effective NFL quarterback, with his limited passing skills and running-game mentality. But the polarizing quarterback was on everyone's radar. Tim Tebow didn't slip through any cracks.
Lin played at Harvard, about as far under the radar as a Division I basketball player can get. The Crimson didn't sniff NCAA tournament play; the last time Harvard made it was 1946, 42 years before Lin was born. No NBA team drafted him, and there was little indication in his 29 games with Golden State last season that Lin was anything truly special.
But once Lin took over the Knicks this season, all that changed. There are still many doubters about Tebow and his long-term effectiveness in the NFL. After the last six games, no one can doubt Lin's potential.
With the Knicks missing Amare Stoudemire and Carmelo Anthony, Lin suddenly emerged in Coach Mike D'Antoni's point-guard-oriented offense. And his self-assurance in waving his teammates back to take the winning three-pointer Tuesday showed not only that he has exceptional skill, but that his confidence is on the same level.
Now the Knicks await the return of Anthony from his groin injury. But if the way Lin played Tuesday, dishing out 11 assists to go with his 27 points, is any indication, he will adapt just fine. Tebowmania swept the sports world last fall; Linsanity is all of that and more.
-- Mike James
||||| TORONTO -- Even after his amazing week, this one took Linsanity to a whole new level. Against Toronto on Tuesday, it was Lin for the win!
Knicks sensation Jeremy Lin made a tiebreaking 3-pointer with less than a second to play to cap his finishing flurry of six straight points and New York rallied to beat the Raptors 90-87, extending its winning streak to six games.
"I'm just glad it went like this so we can calm the Linsanity down," cracked Knicks coach Mike D'Antoni.
No chance of that. The NBA's first Taiwanese-American player, Lin scored 27 points and added a career-high 11 assists in his first game since being named Eastern Conference player of the week.
"He continues to impress every night," New York's Jared Jeffries said. "Every game he plays better, he does more and more to help us win basketball games. You can't ask any more of a kid coming into this situation."
While he's the one standing in the spotlight, Lin said sharing success with his teammates is as big a reward.
"It's not because of me, it's because we're coming together as a team," Lin said. "We started making these steps earlier but we were still losing close games and so obviously it wasn't fun. But when you win, that solves a lot of problems. We've been winning and we've been playing together."
Making just his fifth career start, Lin had no hesitation in taking the decisive shot.
"I'm thankful that the coach and my teammates trust me with the ball at the end of the game," he said. "I like having it at the end of the game. I'm just very thankful."
The season-high crowd of 20,092 roared as Lin shot over Jose Calderon and drained a pull-up jumper from the top with half a second to play, giving the Knicks their first lead since the opening quarter.
D'Antoni declined to call timeout before Lin's winning shot, saying it would only give the Raptors time to draw up their defense. Even so, D'Antoni was impressed with the calm and confidence Lin showed in letting the clock run down.
"You just watch and you're in awe," D'Antoni said. "He held it until five tenths of a second left. He was pretty confident that was going in, no rebounds, no nothing. That ball was being buried."
Toronto's Rasual Butler airballed his attempt at the buzzer as the Knicks swarmed their newest hero at center court.
"The kid made a great shot," Raptors coach Dwane Casey said. "(Calderon) had to give him space or he was going to give up a driving path and (Lin) made a tough shot."
But Casey was left steamed at his team's failure to hold a lead, especially after the Raptors made nine turnovers in the final quarter.
"It should not have come to (Lin's) play," Casey said. "We had some many boneheaded plays to get to that play and to make that play relevant. We should have taken care of business before that."
Amare Stoudemire returned from a four-game absence with 21 points and Tyson Chandler had 13 for New York.
Calderon scored 25 points, Linas Kleiza had 15 points and 11 rebounds, and DeMar DeRozan scored 14 for the Raptors.
Up 75-66 to start the fourth, Toronto widened its lead with a three-point play by Barbosa before the Knicks stormed back with a 10-0 run, cutting it to 78-76 and forcing the Raptors to call timeout with 6:22 remaining.
Kleiza stopped the run with a driving layup, Amir Johnson added a hook shot and, after Lin made one of two from the line, Barbosa's layup made it 84-77 with 4:49 to go.
Toronto led 87-82 with less than two minutes to go when Iman Shumpert stole the ball from Calderon and drove in for an uncontested dunk. After a missed shot, Lin completed a three-point play, tying it at 87 with 1:05 left.
Barbosa missed a 3 for Toronto and, at the other end, Shumpert missed a jumper but Chandler grabbed the rebound. Lin took the ball near midcourt and let the clock run down to 5 seconds before driving and pulling up against Calderon to launch the decisive shot, touching off the latest instance of Linsanity.
"(Calderon) was trying to push me left," Lin said. "He was giving me a little bit of space. I just tried to take it down as low as I could. I figured it's probably not going to be possible to get to the basket with the help that they had."
Calderon was surprised to see Lin pull up for the game-winner.
"I thought he was going to drive," Calderon said. "That's why I tried to give him his left hand but he made a great shot."
Calderon was held scoreless in the final quarter, with D'Antoni crediting Shumpert for keeping the Toronto guard in check.
"He did a great job," D'Antoni said. "Shump got on Calderon and changed up everything."
The Raptors had a photo of Lin on their team website in the hours before the game, and his visit generated major interest among Toronto's Asian community, estimated at over 280,000 people, or more than 11 percent of the local population. The Chinese Canadian Youth Athletics Association and the Taiwanese Canadian Association of Toronto both sent groups of almost 300 fans as Toronto sold out for the second time in 13 home games. One group of fans in the upper deck wore white T-shirts spelling out his name.
Not all the fans were so positive: Lin was booed several times throughout the game.
Local media also took note; some 75 reporters and 16 cameras packed a Tuesday morning press conference to hear Lin speak, with dozens more turned away to prevent overcrowding. More than 25 Chinese Canadian journalists were due to cover the game, including one who presented Lin with a book of "Year of the Dragon" stamps from Canada Post and asked him to record a message in Mandarin, which he did.
Even D'Antoni was shocked by the size of the throng upon walking in Tuesday morning for his turn at the microphone.
"Are we in the playoffs now?" D'Antoni joked as he made his way to the front of the room.
It was Calderon, coming off a career-high 30 points in Sunday's loss to the Lakers, who was hot early, scoring 12 points in the first as the Raptors led 28-21 after one. Lin missed his first shot and didn't score until a driving layup with 3:46 left in the first. He had four points and four assists in the opening quarter.
Lin turned the ball over on three straight possessions early in the second and Toronto took advantage with a 6-0 run, widening its lead to 13 points. He also missed a running bank shot as the half ended as the Raptors took a 47-36 lead into the break.
Stoudemire scored seven points and Lin had six points and four assists as the Knicks scored 30 points in the third, but still trailed 75-66 heading into the fourth.
NOTES: Lin matched a career high with eight turnovers. ... Stoudemire returned after missing the past four games following the death of his older brother, Hazell, who was killed in a car crash. ... Raptors G Jerryd Bayless (left ankle) missed his third straight game. Casey said Bayless could be available for Wednesday's game against San Antonio. ... Before the game, Chandler presented Casey with his NBA championship ring. Both were with Dallas last season when they beat Miami. ... Carmelo Anthony (right groin) missed his fourth straight game. ... New York's 17-point comeback was the biggest by a Toronto opponent this season. ... Musician and actor Steven Van Zandt watched from a courtside seat. ||||| Jeremy Lin has taken the Knicks and their fans on a memorable ride.
When I first heard that people were comparing Jeremy Lin to Tim Tebow, I assumed they were joking.
Seriously? Tim Tebow and Jeremy Lin?
It's not even close, ladies and gentlemen.
Jeremy Lin was an undrafted player out of Harvard where he never once played in an NCAA tournament game. He qualified for a few postseason awards but never won any.
Lin was All-Ivy League first team and holds several conference records. That's nice. Then there's Tim Tebow.
Tim Tebow was the starting quarterback for almost his entire career at Florida, a program that dominated college football during his time there. He played key roles on two national championship teams, in 2007 and 2009.
Tebow also won the 2007 Heisman Trophy.
Lin's Harvard education would automatically eliminate him from "underdog" status if he were pursuing almost any post-college career except for professional sports.
The University of Florida is to professional sports what Harvard University is to the legal, financial and medical professions. If it's not, then at the very least it's of Ivy League quality. The Florida men's basketball team won two national titles while Tebow was starring for the football team. Harvard was noticeably absent from the NCAA Finals in both years.
Tim Tebow led the Broncos on a run to the playoffs last season but his passing numbers were awful.
When the professional drafts were held, Tim Tebow was chosen in the first round of the NFL Draft. Jeremy Lin? No one drafted him at all.
Tebow went to the Denver Broncos, where he didn't start. He sat on the bench biding his time. Jeremy Lin went to the NBA Development League, and he was cut by two NBA teams as well.
While Tebow languished on the Broncos bench, his fans ponied up money to buy a billboard in Denver begging the team to start Tebow. Jeremy Lin? No billboards, no underground surge of fan support begging the Knicks to start him. Lin simply sat on the bench until he was asked to play. When he was asked, he responded.
What really separates Lin from Tebow though is those pesky numbers. Yes, both teams have produced the all-important "wins" with their respective leaders at the helm.
However, Tebow's Denver team won games through a variety of metrics. Defense and special teams both played key roles in the Broncos' surge. Tebow was the quarterback and emotional leader but he was producing some of the worst passing numbers in the league. That's not denying Tebow credit—it's reality.
Tim Tebow shows off his 2007 Heisman Trophy.
Lin has also played a key role as an emotional leader of the Knicks over the course of their current six-game win streak. He's also leading the team in both scoring and assists. Lin has scored more points in his first five NBA starts than any NBA player since the 1976 NBA-ABA merger.
That would be like Tebow leading his team in rushing and passing during Denver's streak—while also breaking a slew of rookie passing records or records for passing in first starts at quarterback.
Instead, you have Tebow with a pass completion percentage of 46.5 and Lin shooting 50 percent over the current win streak. For those not completely familiar with both sports: a quarterback's completion percentage should run well above an NBA player's field-goal percentage.
Lin isn't a one-man team. He is not the only reason that the Knicks are on a winning streak right now. Tyson Chandler and Iman Shumpert have both stepped up their defensive games, and Chandler is rebounding at a high rate. Lin may also be slightly aided by the absence of Carmelo Anthony, who would be taking many of the shots that Lin has found himself taking.
The comparison of Lin to Tebow is absurd, mainly because one of the two players is an actual underdog rags-to-riches story and the other quite simply is not.
What Tebow did this past season in the NFL was miraculous. He led his Broncos team to the playoffs and then produced a dramatic win over the league's best defense in an overtime thriller. That's what teams are paying for when they draft someone in the first round of the NFL Draft.
What Lin is currently doing with the New York Knicks is far more outside the realm of expectations. We're talking about an undrafted player from a non-traditional college basketball school taking one of the most historically relevant NBA Franchises on a memorable win streak.
So if you're looking for an NFL underdog story to compare Jeremy Lin to, that's fine. It's not Tim Tebow, though. | – Partly because Jeremy Lin is a man of faith, his crazy-fast ride into sports stardom has led to endless comparisons with Tim Tebow. Well sorry, Tebow fans, but Lin wins this one easily in terms of athleticism and sheer amazement value, writes Mike James at the Los Angeles Times. The young New York Knicks star "has shown himself to be a singular talent, scoring more points in his first five starts than any NBA player since the ABA merger," writes James. (Click here to see video of last night's dramatic game-winning shot.) Tebow had some late-game heroics for the Broncos, but he was still 27th in the NFL quarterback rankings. Consider, too, that Tebow was a Heisman winner and a first-round draft pick on every team's radar, writes Ben Shapiro at Bleacher Report. Lin, not so much. He played for Harvard and got drafted by nobody. Now he's working a relative miracle in New York. "We're talking about an undrafted player from a non-traditional college basketball school taking one of the most historically relevant NBA franchises on a memorable win streak," writes Shapiro. "The comparison of Lin to Tebow is absurd, mainly because one of the two players is an actual underdog rags-to-riches story and the other quite simply is not." Click for more.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``National All Schedules Prescription
Electronic Reporting Reauthorization Act of 2014''.
SEC. 2. AMENDMENT TO PURPOSE.
Paragraph (1) of section 2 of the National All Schedules
Prescription Electronic Reporting Act of 2005 (Public Law 109-60) is
amended to read as follows:
``(1) foster the establishment of State-administered
controlled substance monitoring systems in order to ensure
that--
``(A) health care providers have access to the
accurate, timely prescription history information that
they may use as a tool for the early identification of
patients at risk for addiction in order to initiate
appropriate medical interventions and avert the tragic
personal, family, and community consequences of
untreated addiction; and
``(B) appropriate law enforcement, regulatory, and
State professional licensing authorities have access to
prescription history information for the purposes of
investigating drug diversion and prescribing and
dispensing practices of errant prescribers or
pharmacists; and''.
SEC. 3. AMENDMENTS TO CONTROLLED SUBSTANCE MONITORING PROGRAM.
Section 399O of the Public Health Service Act (42 U.S.C. 280g-3) is
amended--
(1) in subsection (a)(1)--
(A) in subparagraph (A), by striking ``or'';
(B) in subparagraph (B), by striking the period at
the end and inserting ``; or''; and
(C) by adding at the end the following:
``(C) to maintain and operate an existing State-
controlled substance monitoring program.'';
(2) by amending subsection (b) to read as follows:
``(b) Minimum Requirements.--The Secretary shall maintain and, as
appropriate, supplement or revise (after publishing proposed additions
and revisions in the Federal Register and receiving public comments
thereon) minimum requirements for criteria to be used by States for
purposes of clauses (ii), (v), (vi), and (vii) of subsection
(c)(1)(A).'';
(3) in subsection (c)--
(A) in paragraph (1)(B)--
(i) in the matter preceding clause (i), by
striking ``(a)(1)(B)'' and inserting
``(a)(1)(B) or (a)(1)(C)'';
(ii) in clause (i), by striking ``program
to be improved'' and inserting ``program to be
improved or maintained'';
(iii) by redesignating clauses (iii) and
(iv) as clauses (iv) and (v), respectively;
(iv) by inserting after clause (ii), the
following:
``(iii) a plan to apply the latest advances
in health information technology in order to
incorporate prescription drug monitoring
program data directly into the workflow of
prescribers and dispensers to ensure timely
access to patients' controlled prescription
drug history;'';
(v) in clause (iv) (as so redesignated), by
inserting before the semicolon the following:
``and at least one health information
technology system such as electronic health
records, health information exchanges, and e-
prescribing systems''; and
(vi) in clause (v) (as so redesignated), by
striking ``public health'' and inserting
``public health or public safety'';
(B) in paragraph (3)--
(i) by striking ``If a State that submits''
and inserting the following:
``(A) In general.--If a State that submits'';
(ii) by inserting before the period at the
end ``and include timelines for full
implementation of such interoperability. The
State shall also describe the manner in which
it will achieve interoperability between its
monitoring program and health information
technology systems, as allowable under State
law, and include timelines for the
implementation of such interoperability''; and
(iii) by adding at the end the following:
``(B) Monitoring of efforts.--The Secretary shall
monitor State efforts to achieve interoperability, as
described in subparagraph (A).'';
(C) in paragraph (5)--
(i) by striking ``implement or improve''
and inserting ``establish, improve, or
maintain''; and
(ii) by adding at the end the following:
``The Secretary shall redistribute any funds
that are so returned among the remaining
grantees under this section in accordance with
the formula described in subsection
(a)(2)(B).'';
(4) in subsection (d)--
(A) in the matter preceding paragraph (1)--
(i) by striking ``In implementing or
improving'' and all that follows through
``(a)(1)(B)'' and inserting ``In establishing,
improving, or maintaining a controlled
substance monitoring program under this
section, a State shall comply, or with respect
to a State that applies for a grant under
subparagraph (B) or (C) of subsection (a)(1)'';
and
(ii) by striking ``public health'' and
inserting ``public health or public safety'';
and
(B) by adding at the end the following:
``(5) The State shall report on interoperability with the
controlled substance monitoring program of Federal agencies,
where appropriate, interoperability with health information
technology systems such as electronic health records, health
information exchanges, and e-prescribing, where appropriate,
and whether or not the State provides automatic, real-time or
daily information about a patient when a practitioner (or the
designee of a practitioner, where permitted) requests
information about such patient.'';
(5) in subsections (e), (f)(1), and (g), by striking
``implementing or improving'' each place it appears and
inserting ``establishing, improving, or maintaining'';
(6) in subsection (f)--
(A) in paragraph (1)(B) by striking ``misuse of a
schedule II, III, or IV substance'' and inserting
``misuse of a controlled substance included in schedule
II, III, or IV of section 202(c) of the Controlled
Substance Act''; and
(B) by adding at the end the following:
``(3) Evaluation and reporting.--Subject to subsection (g),
a State receiving a grant under subsection (a) shall provide
the Secretary with aggregate data and other information
determined by the Secretary to be necessary to enable the
Secretary--
``(A) to evaluate the success of the State's
program in achieving its purposes; or
``(B) to prepare and submit the report to Congress
required by subsection (k)(2).
``(4) Research by other entities.--A department, program,
or administration receiving nonidentifiable information under
paragraph (1)(D) may make such information available to other
entities for research purposes.'';
(7) by striking subsection (k);
(8) by redesignating subsections (h) through (j) as
subsections (i) through (k), respectively;
(9) in subsections (c)(1)(A)(iv) and (d)(4), by striking
``subsection (h)'' each place it appears and inserting
``subsection (i)'';
(10) by inserting after subsection (g) the following:
``(h) Education and Access to the Monitoring System.--A State
receiving a grant under subsection (a) shall take steps to--
``(1) facilitate prescriber and dispenser use of the
State's controlled substance monitoring system; and
``(2) educate prescribers and dispensers on the benefits of
the system both to them and society.'';
(11) in subsection (k)(2)(A), as redesignated--
(A) in clause (ii), by striking ``or affected'' and
inserting ``, established or strengthened initiatives
to ensure linkages to substance use disorder services,
or affected''; and
(B) in clause (iii), by striking ``including an
assessment'' and inserting ``between controlled
substance monitoring programs and health information
technology systems, and including an assessment'';
(12) in subsection (l)(1), by striking ``establishment,
implementation, or improvement'' and inserting ``establishment,
improvement, or maintenance'';
(13) in subsection (m)(8), by striking ``and the District
of Columbia'' and inserting ``, the District of Columbia, and
any commonwealth or territory of the United States''; and
(14) by amending subsection (n), to read as follows:
``(o) Authorization of Appropriations.--To carry out this section,
there are authorized to be appropriated $7,000,000 for each of fiscal
years 2014 through 2018.''. | National All Schedules Prescription Electronic Reporting Reauthorization Act of 2014 - Amends the National All Schedules Prescription Electronic Reporting Act of 2005 to include as a purpose of such Act to foster the establishment of state-administered controlled substance monitoring systems in order to ensure that appropriate law enforcement, regulatory, and state professional licensing authorities have access to prescription history information for the purposes of investigating drug diversion and prescribing and dispensing practices of errant prescribers or pharmacists. Amends the Public Health Service Act to revise and update the controlled substance monitoring program, including to: allow grants to be used to maintain and operate existing state controlled substance monitoring programs, require submission by a state of a plan to apply the latest advances in health information technology to incorporate prescription drug monitoring program data directly into the workflow of prescribers and dispensers, require timelines and descriptions for implementation of interoperability for purposes of information sharing with a bordering state that already operates a monitoring program, require health information interoperability standards to be consistent with at least one health information technology system, require the Secretary of Health and Human Services (HHS) to redistribute any funds that are returned among the remaining grantees, require a state to provide the Secretary with aggregate data and other information to enable the Secretary to evaluate the success of the state's program and to submit a progress report to Congress, and expand the program to include any commonwealth or territory of the United States. Authorizes the Drug Enforcement Administration (DEA) or a state Medicaid program or health department receiving nonidentifiable information from a controlled substance monitoring database to make such information available to other entities for research purposes. Requires a state receiving a grant to: (1) facilitate prescriber and dispenser use of the state's controlled substance monitoring system, and (2) educate prescribers and dispensers on the benefits of the system both to them and society. Removes the preferences for grants related to drug abuse for states with approved applications to implement controlled substances monitoring programs. Revises requirements for studies on progress to include assessment of the effects upon linkages to substance abuse disorder services and interoperability with health information technology systems. |
About Jeff Schmalz
Jeff Schmalz was a journalistic prodigy. He was hired by The New York Times while still a college student, and he was essentially running its metropolitan coverage by his mid-20s. From his crisply pressed trousers and shirts to his unerring sense of how to structure a feature story, he was a consummate Timesman. People in the newsroom speculated that someday he could be “on the masthead” – the list of the top editors on the world’s most important newspaper. All the while, though, Jeff was struggling with his identity as a gay man. He came out to many friends and peers on the Times, but he kept his sexual orientation secret from the newsroom management, the people who had control over his professional life. Under the executive editor A.M. Rosenthal, the Times newsroom of the 1970s and 80s was a homophobic place, and journalists known to be gay or lesbian were stalled or even demoted in their careers.
Then, one day in December 1990, Jeff collapsed in the newsroom with a brain seizure. It was the first evidence that he had full-blown AIDS – a death sentence in these years before drug cocktails were available to victims of the disease. With AIDS, Jeff was endangered and he was outed. Yet he was also cracked wide open in positive ways. He found his calling in writing about HIV and AIDS, doing memorable portraits of Magic Johnson, Mary Fisher, and Harold Brodkey, among others, and chronicling his own experience reporting on the most personal beat imaginable. As Jeff himself said at the time, having AIDS stirred an empathy in him that he had long obscured beneath a witty, cynical, hard-driven exterior.
Who Jeff was and what he did deeply changed The New York Times, sensitizing it as never before to the humanity of gay people. The Times of today – publishing same-sex wedding announcements, editorializing in favor of marriage equality – is the fruition of changes that Jeff helped set into motion but never lived long enough to fully see.
And now, 22 years after Jeff died at age 39, his contributions have been largely forgotten. “Dying Words” will restore his name and work to the annals of gay history and journalistic history.
About Our Project
Dying Words: The AIDS Reporting of Jeffrey Schmalz is a project with two parts – an audio documentary and a book. Both were based on our contemporary interviews with many of Jeff’s friends and colleagues, existing recordings of Jeff himself, and excerpts from his AIDS coverage. The project features interviews with major journalists as Anna Quindlen, Adam Moss, Arthur Sulzberger, Jr., and Elizabeth Kolbert, as well as the AIDS activist Mary Fisher and the LGBT historian Eric Marcus. Our project had the full and enthusiastic support of Wendy Schmalz, Jeff’s sister, who is his closest living relative. Thanks to Wendy, we had access to original microcassette recordings of Jeff’s interviews with Larry Kramer, Magic Johnson, Randy Shilts, and Bill Clinton, among others.
The audio documentary was produced by Kerry Donahue and edited by Ben Shapiro, both award winning journalists. It was distributed by PRX to more than 125 public radio stations, including eight of the top 10 markets. It is also available for download. The Dying Words book was published by CUNY Journalism Press and released on December 1, 2015 to coincide with World AIDS Day. ||||| The two come into conflict, for example, when an infectious patient willfully refuses treatment and keeps passing on a disease. In the 1990s, during an outbreak of drug-resistant tuberculosis in New York City, Dr. Frieden famously detained patients who refused to take their pills, locking them in hospitals for months until they were cured.
In 2005, he advocated H.I.V.-control measures that he said would “offend both sides of the political establishment.” Condoms and clean syringes were needed even if conservatives disliked them, he said, and the tracing of sexual partners needed to be done even if H.I.V. activists opposed it.
Some of those goals have become law.
Although H.I.V. testing is not mandatory except in the military and under a few other circumstances, the counseling requirement has been dropped. Many hospitals test all patients unless they specifically refuse, and some cities even pay people applying for driver’s licenses to take H.I.V. tests.
Lab results are now reported by name almost everywhere. That makes it possible for local health workers to contact patients to make sure they receive help and to ask for the names of their sex partners so they, too, can be treated.
But that is failing, Dr. Frieden said in the interview. Only about half of those testing positive for H.I.V. are ever asked for names, and those who choose to answer “named relatively few.”
Local health departments lack the staff to trace contacts. “To be fair, it’s legitimately hard,” Dr. Frieden added. Some people find sex partners through apps like Grindr without ever learning their names.
New tests can detect the virus within 10 days of infection, but they are not being used enough, he said. An estimated 155,000 Americans with H.I.V. do not know it. And of those who are tested, 20 percent already have AIDS or are close to it — meaning they may have been spreading the virus for years. ||||| “I have come to the realization that I will almost certainly die of AIDS," New York Times reporter Jeff Schmalz wrote more than two decades ago. He died in 1993.
Schmalz reported on AIDS while he battled the disease. His reporting helped transform coverage of AIDS.
Kerry Donahue is director of the radio program at Columbia Journalism School. She produced the documentary “Dying Words," which explores Schmalz' career and airs on KERA 90.1 FM at 8 p.m. Tuesday, World AIDS Day.
Donahue talked with KERA about Schmalz' life and legacy.
Interview Highlights: Kerry Donahue ...
... on Jeff Schmalz' passion for journalism: "He was a very driven man. He grew up in a small town in Willow Grove, Penn., about an hour north of Philadelphia. He decided somewhere in high school, he worked on the high school newspaper, and he decided when he got to Columbia University that he would get a job at The New York Times. He worked as a copyboy. He immediately fell in love with it, so much so that he dropped out of Columbia and started working at the Times full-time, and he was very very ambitious about it.
... on how being gay affected his career: "He was a gay man and he stayed in the closet to people above him. To his colleagues and people below him, he was very out and open. But there was a consequence at the Times for being gay in much of the '70s and '80s. It was a very homophobic newsroom. Often, for gay staffers, if it was found out you were gay, [there was] a punishment or a sort of setback. That happened to Jeff Schmalz in 1983. He was passed over for promotion and about four months after that he was sent out to be a reporter - what would have been after a significant number of years of being an editor at The New York Times - a very entry-level reporter job - he felt he had been exiled a bit."
... on when Schmalz realized he had AIDS: "He had a seizure while at work on Dec. 21, 1990. He was just at his desk and he collapsed. He had a grand mal seizure, at which point the entire newsroom came running. When he got the diagnosis, it was quite severe. He had PML, a brain infection which is typically fatal, often killing people within months. He really thought he would die almost immediately."
... on Schmalz' reporting changing after his diagnosis: "When he came back to work, I think he still in some ways was a bit in denial. He wanted to cover the 1992 presidential campaign, which he did - and in the course of that I think he started to find a way ... to cover AIDS. It took him a little while to figure out what angle on AIDS he wanted to cover. He was always a newspaper man thinking of how to make the story the most compelling and he ultimately settled on a series of profiles of people like himself who were living with AIDS and working and being in the world."
Video: Jeff Schmalz on "The Charlie Rose Show" in 1992
Jeff Schmalz on The Charlie Rose Show, November 19, 1992. ||||| World AIDS Day is, like other world disease days, a publicity gambit that has a distinctly absurd pageantry. But there is substance beyond candlelight vigils, a red ribbon hung from the White House portico, and proclamations from politicians and public health officials. Most tangibly, each World AIDS Day, held annually on Dec. 1, broadcasts official updates about the damage caused by HIV, which at last count had infected 78 million people since the epidemic surfaced in 1981 and had killed half of them. And beginning in 2012, World AIDS Day began to trumpet a new possibility that a few years earlier would have seemed preposterous, even delusional: ending the epidemic.
A growing number of locales on this World AIDS Day can rightly claim to have made important steps toward ending their epidemics. Key to this progress is the recent evidence that treatment is prevention: Anti-HIV drugs can reduce the virus to such low levels in appropriately treated people that they rarely infect others, whether it be through sex, mother-to-child transmission, or even sharing needles. Pre-exposure prophylaxis, or PrEP, protects uninfected people with the same drugs. San Francisco and New York state both have explicit plans to end their epidemics by more aggressively exploiting treatment as prevention and PrEP in concert with staples such as condom promotion, education, clean needle provision, and behavior change.
To retain its epidemic status, AIDS requires that each infected person spread the virus to at least one other. If interventions can reduce transmissions such that, say, only 75 percent of people with HIV infect someone else, an epidemic will sputter and, eventually, stop. But for millions of people, a great distance separates the aspiration to end AIDS from the achievement of that goal.
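(A one-line version of that arithmetic, offered here as a gloss in standard epidemiological terms: the quantity being pushed down is the effective reproduction number R, the average number of people each infected person goes on to infect. With R = 0.75, each generation of new infections is three-quarters the size of the last, so after n generations new infections scale as R^n = 0.75^n, which decays toward zero. Any R below 1 ends an epidemic; any R of 1 or more sustains it.)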
In the summer of 2014, Oscar, 28, earned his living as a sex worker in Tijuana, the Mexican border town that abuts San Diego. During the day Oscar was Beto, a gay man who cruised the park. At night he became a transgender woman, Alessandra, Alé for short, who worked the clubs and the streets in the red-light district known as the Zona Norte. “Alé makes more money than Beto,” Oscar said, adding that the life he shared with his two alter egos was “very balanced.”
Oscar lived in Tijuana as a young child but moved to Wisconsin at age 8 on a visa with his aunt and a sister. “I went supposedly to go to school, but then I ended up working, and I liked the money, so I forgot about school,” he said. A Greek restaurant hired him as a dishwasher when he was 12, and his aunt could not persuade him to continue with his studies. “I was young and hardheaded. I regret that now.” The restaurant owners treated him like one of their own children, he said, eventually promoting him to crew leader.
Oscar married and had a son, entertained as a female impersonator, and bought a new Ford Focus. “Over there, I used to do cocaine, because I was making a lot of money,” he said. Then in 2010 he had a car accident. He had no license or insurance, which led to his arrest and the discovery that his visa had expired. After six months at a detention center run by the U.S. Immigration and Customs Enforcement, a judge granted Oscar a “voluntary departure,” which meant he would leave the United States but not be barred from re-entering.
Oscar returned to Tijuana without his family, and he started to smoke crystal methamphetamine, a powerful aphrodisiac used by many sex workers.
Oscar had his first HIV test in Wisconsin when he was 18. He was negative. When he took another test on Tijuana’s Gay Pride Day in 2013, he was still uninfected.
Gay men and transgender women have the highest HIV infection rates of any group in Tijuana. Selling sex further increases risk, as does smoking crystal meth. “Alé gets horny and crazy and wild,” said Oscar.
A meta-analysis in 2008 that looked at 25 studies of transgender women in 14 countries found an HIV prevalence of 27.3 percent in those who sold sex and 14.7 percent in those who did not. The overall estimated prevalence of HIV in Tijuana adults is 0.6 percent, about the same as in the United States.
Oscar said Alé scared him: “Alé gets whatever she wants. She is a very spoiled girl. She’s a very dangerous girl. She is very, very strong. She is my boss. I listen to her sometimes. And Oscar, he is kind of like Alé but more mellow, not that aggressive.”
When Oscar was a boy, he regularly returned to Tijuana to see his mother, and he had his first sexual experience there when he was 10. “I was on the streets most of the time playing around, and there were a lot of old guys,” he remembered. “They’d say, ‘Play with my thing, and I’ll give you a candy.’ ” Back in Wisconsin, he continued to trade sex with older men for treats and money. “It was normal to me.”
Oscar was appalled that so many deportees, like onetime gang member from Los Angeles Lesly Zulema Sanchez, ended up living in the Tijuana River Canal. “I think those people are nuts,” said Oscar. “Sometimes people ask me for a coin or whatever, and they say, ‘Oh, I’m deported.’ I’m like, I’m deported, too, and I try to survive in a different way.”
Oscar said Alé mainly had U.S. clients. “They confuse me because they want Alé, but then they want Alé to behave like Oscar. I don’t like that. Why? Why do they do that?”
Often, Oscar explained, the clients wanted Alé to be the insertive partner. “That’s strange, weird. It bothers me when I’m Alé and they try to touch Beto’s things. I don’t like that. They’re supposed to be locked in a safe.”
Male-to-female transgender people often have male clients who like to receive anal sex but do not identify themselves as gay. In their minds, there is no stigma if they are having sex with a woman, even if she has a penis.
In September 2014, the police arrested Oscar, who was coming out of a weeklong meth binge. A search allegedly found that he was carrying half a kilo of crystal meth, and he was locked up at Tijuana’s La Mesa prison, which houses 6,000 inmates. “The police put the drug on me,” he said. “With no witnesses, my lawyer said I’m going to get five years.”
Centro de Servicios, a nongovernmental organization commonly referred to as SER, is the main HIV prevention program in Tijuana that serves gay men and transgender women. Its staff regularly visits La Mesa prison to conduct rapid tests for the virus and to educate inmates about how to protect themselves. SER’s Kristian Salas tested Oscar and found that he was positive.
The prison moved Oscar to a cell with six other HIV-infected inmates, who received double helpings of food. “Kristian told me I shouldn’t be sad,” said Oscar. “He told me it was like diabetes, and people don’t die as long as they take the medication. I’m not sure about that. Only time will tell me the truth.”
In February, when we last checked in with Oscar, he was still in prison and said his CD4 white blood cells—HIV’s main target—numbered 353 per microliter of blood. (Normal CD4 counts range from 600 to 1,200.) He complained that he was not yet receiving antiretrovirals. Mexico’s guidelines call for treating all HIV-infected people, regardless of their CD4 counts.
* * *
Tijuana as a whole does not have a particularly severe HIV/AIDS problem—especially considering the other health issues it faces. In Tijuana, as in the United States and much of the world outside of sub-Saharan Africa, the virus primarily has made headway in “high-risk” groups, creating what scientists call a “micro hyperepidemic.” This limited spread makes places like Tijuana especially amenable to ending AIDS. It is far more difficult to launch treatment and prevention campaigns for a general population than it is to target smaller and interconnected communities of transgender women, gay men, sex workers, and people who inject drugs, the main groups infected with HIV in Tijuana.
Logically, places like Tijuana would be at the top of the list for campaigns to end AIDS epidemics. But no coordinated effort exists in Tijuana to test people at the highest risk, the cornerstone of any such effort. When people do test negative, they are not offered PrEP. And for those who are infected, a recent study found that only 3.7 percent were receiving anti-HIV drugs. Treatment as prevention doesn’t have a hope if it’s not used.
The reasons why Tijuana has not jumped on the ending AIDS bandwagon reach far beyond the problem of limited resources—and they are common to other places that have similarly scaled, easier-to-contain micro hyperepidemics. There are two essential obstacles, and they overlap. First, there is no strong leadership. This in part reflects the reality that the communities most affected by HIV have weak political muscle: Tijuana has no advocacy movement loudly chanting in front of government buildings on behalf of gay men, transgender women, drug users, and sex workers. And the rampant stigma and discrimination these groups face are compounded because many are deportees from the United States or migrants from Central America, most are poor, and some are homeless.
In September, the World Health Organization issued guidelines that for the first time recommended treating all HIV-infected people in the world with antiretroviral drugs and offering PrEP to everyone at “substantial risk” of becoming infected. The guidelines explicitly state that if countries adopt these recommendations, they will contribute to “ending the AIDS epidemic as a major public health threat by 2030.” This is a glorious vision that on this World AIDS Day surely will receive more rah-rahism than ever before. ||||| On Oct. 15, 1982, at a White House press briefing, reporter Lester Kinsolving asked Press Secretary Larry Speakes about a horrifying new disease called AIDS that was ravaging the gay community.
“What’s AIDS?” Speakes asked.
“It’s known as the ‘gay plague,’ ” Kinsolving replied.
Everyone laughed.
“I don’t have it,” Speakes replied. “Do you?” The room erupted in laughter again. Speakes continued to parry Kinsolving’s questions with quips, joking that Kinsolving himself might be gay simply because he knew about the disease. The press secretary eventually acknowledged that nobody in the White House, including Reagan, knew anything about the epidemic.
“There has been no personal experience here,” Speakes cracked. The room was in stitches.
On Dec. 1, 2015—World AIDS Day—Vanity Fair debuted a short documentary by Scott Calonico about this now-infamous exchange. Calonico has finally unearthed audio of the galling colloquy between Speakes and Kinsolving. Their exchange—and the accompanying laughter—is as horrifying now as it must have been to those dying of the disease in 1982. ||||| Study in The New England Journal of Medicine concludes improvements to education, lifestyle and health are contributing to decline in overall number of new cases of dementia.
Editorial notes study in the New England Journal of Medicine showing home births present slightly more risk to mother and child than hospital births; holds study's conclusions are potentially more reliable than those of previous studies showing that home births are equally safe.
Study in New England Journal of Medicine finds experimental treatment for Ebola patients in Guinea involving transfusion of blood plasma containing disease's antibodies was ineffective; paper in same issue asserts doctors had some success with alternative malaria treatment among Ebola patients in Liberia; experts suggest both approaches would benefit from more testing.
Study in New England Journal of Medicine finds planned out-of-hospital births carry higher risk of infant death than hospital deliveries, but are less likely to involve cesarean section.
Dr Thomas R Frieden, director of Centers for Disease Control and Prevention, reports in New England Journal of Medicine that United States remains in peril of losing war on AIDS because hundreds of thousands of Americans diagnosed with HIV infection are not receiving medical care or antiretroviral therapies; report calls for radical changes in how disease is fought but notes transition would primarily come from state and local health departments, over which CDC has little power.
Study in The New England Journal of Medicine finds giving progesterone to women who have had three or more miscarriages does not improve chances of carrying pregnancy to term, undermining hopes of researchers.
Study published in The New England Journal of Medicine finds that patients who lower their systolic blood pressure to under 120 are 25 percent less likely to have heart attacks, heart failure or strokes, or to die from heart disease.
Study published in The New England Journal of Medicine finds nonsurgical therapy alone for severe osteoarthritis is far less effective than knee replacement surgery in relieving pain and restoring function.
A study in The New England Journal of Medicine also found that emergency room visits for supplements occurred frequently among young adults.
Study published in The New England Journal of Medicine finds that children who were exposed to cancer treatment during last two trimesters of their mother's pregnancy had normal cognitive and cardiac function.
Studies in New England Journal of Medicine conclude drugs nivolumab and cabozantinib work better than standard drug everolimus in treatment of advanced kidney cancer; findings are likely to lead to changes in patient care.
Controversial 2010 study of oxygen levels for extremely premature babies continues to stir turmoil over risks and ethical consent; Federal District Court Judge Karon O Bowdre rejects lawsuit over whether families were properly warned about risks of participating in trial; New England Journal of Medicine says decision shows clinical trial was solid.
Joe Nocera Op-Ed column praises paper published in The New England Journal of Medicine on decline of cigarette use in Sweden, and corresponding rise in use of smokeless tobacco products snus; welcomes paper's suggestion of increasing taxes on cigarettes while lowering cost of electronic cigarettes as effective means to reduce smoking-related illness and death.
Study in The New England Journal of Medicine finds program that trained first year female students at three Canadian college campuses to avoid rape lowered their risk of sexual assault from 10 percent to 5 percent; many researchers praise potential of program, but some say it is no substitute for dealing with attitudes and behavior of potential rapists.
Two studies in The New England Journal of Medicine show doctors can more effectively treat brain tumors by determining their genetic characteristics, rather than standard procedure of examining tissue under microscope.
Doctors are underscoring need for test that can determine whether expensive immune-boosting drugs like Opdivo and Yervoy will work on cancer patients in advance, as drugs help only minority of patients, but work exceedingly well; challenges of creating test are both ethical and scientific; importance of test has been laid out in study published in New England Journal of Medicine.
Doctors presenting at annual meeting of American Society of Clinical Oncology report on new class of drugs that spur body's immune system to attack tumors, and can prolong lives of people with most common form of lung cancer; separate study, published in New England Journal of Medicine, finds particular genetic signature in tumor can help predict which patients could benefit from these immune-boosting drugs.
Dr Mark Olfson study published in the New England Journal of Medicine finds rates of severe mental illness among children has fallen significantly over generation, countering data from Centers for Disease Control and Prevention and other agencies showing rise in some conditions; study comes at time of debate over incidence and treatment of childhood mental illness and points to need for common rate baseline.
Dr Scott Halpern study in The New England Journal of Medicine finds penalizing those trying to quit smoking for failing to stay off cigarettes is more effective than offering awards for success.
Comprehensive study published in New England Journal of Medicine finds that small minority of acutely premature babies born before 22 weeks, which had been considered point of viability, can survive outside the womb; discovery alters calculus for how aggressively very premature babies should be treated and is certain to have profound impact on abortion debate.
Sir Nilesh Samani study in New England Journal of Medicine concludes that shorter stature increases risk of heart disease, with each additional 2.5 inches of height bringing a 13.5 percent reduction in heart disease risk; surprising finding may lead to discovery of new links to heart disease.
Dr Terrie E Taylor study in The New England Journal of Medicine finds that child deaths from malaria often result from swelling of brain.
Researchers report in New England Journal of Medicine that experimental cholesterol drugs may sharply reduce risk of heart attacks and strokes according to preliminary evidence.
Centers for Disease Control and Prevention study published in The New England Journal of Medicine finds deadly bacterial infection Clostridium difficile is estimated to have afflicted almost half a million Americans and caused 29,000 deaths in 2011.
Study in The New England Journal of Medicine says parents and doctors should weigh how urgently surgery is needed for children younger than three years; cites increasing evidence that general anesthesia may impair brain development in babies and young children.
Dr Craig Spencer publishes essay in the New England Journal of Medicine describing his experience as New York's first and only Ebola patient; contends that he was falsely accused of putting the public at risk and was subject to political and media gamesmanship after being hospitalized.
The finding, in The New England Journal of Medicine, addresses a condition that afflicts 2 percent of American children.
Study published in New England Journal of Medicine finds that drugs Eylea, Lucentis and Avastin are equally effective as treatments for form of vision loss caused by diabetes, although prices for drugs range from $50 to $1,950 per dose; study comes at time of concern over costs of pharmaceuticals.
Editorial highlights study published in New England Journal of Medicine finding that smoking is even more harmful to health than previously thought; calls on government to expand efforts to help smokers quit, particularly state governments in the United States through programs like Medicaid.
Study in The New England Journal of Medicine suggests at least five diseases and 60,000 annual deaths should be linked to tobacco in United States; smoking is already tied to nearly half a million US deaths a year from 21 diseases including 12 forms of cancer.
Study published in New England Journal of Medicine demonstrates efficacy of new treatment in patients who have suffered most severe kind of stroke.
Research published online in The New England Journal of Medicine finds that drugs that free body’s immune system to fight cancer, known as PD-1 inhibitors, have shown strong preliminary results in treating Hodgkin’s lymphoma, shrinking tumors in well over half of patients who have exhausted many other treatment options.
Three-year study published in The New England Journal of Medicine reveals that blood pressure drug found to block the effects of a gene mutation that causes Marfan syndrome, condition that leads to heart problems, is no more effective than standard treatment.
Study published in New England Journal of Medicine points to overdiagnosis of thyroid cancer in South Korea; finds that number of diagnoses escalated as screening became popular, and newly detected cancers were almost all very tiny ones.
Op-Ed article by Dartmouth professor of medicine H Gilbert Welch on findings published in New England Journal of Medicine that there is an epidemic of overdiagnosis of thyroid cancer in South Korea; asserts that having doctors not look too hard for early cancer is in one's interest.
Study published in The New England Journal of Medicine reports that experimental therapy involving genetically programmed T-cells has brought remissions to high proportion of patients who were facing death from advanced leukemia after standard treatments had failed.
National Cancer Institute study published in The New England Journal of Medicine describes efforts to understand exceptional responders, or people who defy expectations by responding dramatically to drug treatment administered as last-ditch effort to treat seemingly incurable cancers; genetics may be at root of many such cases.
Study in New England Journal of Medicine examines results of program in St Louis that counseled teenage girls and provided them with free contraception, including long-acting methods like intrauterine devices and hormonal implants; relates that girls who participated in program had less than a quarter of the annual pregnancy, birth and abortion rates of teenage girls nationally.
World Health Organization publishes figures in New England Journal of Medicine that reveal far worse outlook than it had previously anticipated for Ebola epidemic in West Africa; report raises for first time possibility that epidemic will not be brought under control and that the disease will become endemic in the region.
A high-dose vaccine prevents older adults from catching the flu more effectively than the standard vaccine, researchers say.
Research published in The New England Journal of Medicine finds that new experimental drug has shown striking efficacy in prolonging lives of people with heart failure; developed by Novartis and known by code name LCZ696, drug could replace longstanding treatment for a condition that is the leading cause of hospitalization in United States and Europe.
The Upshot column; studies published in New England Journal of Medicine confirm old cliche that moderation is one's best bet in health, and that applies to sometimes extreme recommendations of doctors; cites low salt diets.
Editorial notes that conflicting studies published in The New England Journal of Medicine have brought new evidence to bear on debate over reducing amount of salt in the American diet; points out one issue now is whether a diet too low in sodium can lead to negative health effects.
Report published in New England Journal of Medicine traces origin of Ebola outbreak in western Africa to Gueckedou, Guinea, where 2-year-old boy died on Dec 6, 2013, of symptoms similar to the disease; eight months later, with 1,779 cases in Guinea, Sierra Leone and Liberia, including 961 deaths, and a small cluster in Nigeria, outbreak is out of control and is getting worse; epidemiologists predict it will take many months to control, and World Health Organization spokesman says thousands more health workers are needed to fight it.
Study in the New England Journal of Medicine finds that mutations in a gene called PALB2 raise the risk of breast cancer in women by almost as much as mutations in BRCA1 and BRCA2, genes that have long been implicated in most inherited cases of the disease.
Dr Nick Baylor study in The New England Journal of Medicine shows that parasite that causes most deadly form of malaria is becoming resistant to artemisinin, most effective drug used to treat it, adding urgency to effort to develop alternative treatments.
Papers published in New England Journal of Medicine find that new drugs that block highly specific parts of immune system are showing promise in treating eczema and psoriasis.
Study published in The New England Journal of Medicine finds steroid injections for spinal stenosis, widely used method of treating common cause of back and leg pain, may provide little benefit for many patients; steroid injections are often tried when physical therapy or anti-inflammatory medication fails, with the aim of avoiding expensive surgery; some insurance companies require injections before approving surgery.
Two studies independently identify mutations in a single gene that protect against heart attacks by keeping levels of triglycerides, confirmed as a cause of heart disease, very low for a lifetime; discovery, published in The New England Journal of Medicine, prompts hopes for a new class of drugs to fight heart disease.
| – Today marks World AIDS Day, and there's no shortage of fascinating, hopeful, and dour pieces to read. The four things we found most interesting to know: America remains "in danger of losing the war on AIDS." That opinion belongs to CDC Director Thomas Frieden. He makes his case in the New England Journal of Medicine today, but the New York Times' reporting of it is the piece to read. Sample: "While the article's language was dry and academic, some AIDS experts said it amounted to a call for radical changes ... [that] can be made only by state and local health departments, over which the CDC has little control." The name Jeff Schmalz. The "journalistic prodigy" was a New York Times reporter who, as KERA's headline puts it, "reported on AIDS while he fought the disease." He died in 1993, having learned about his diagnosis after having a seizure while in the newsroom on Dec. 21, 1990. "His reporting helped transform coverage of AIDS." Oscar Villareal's experience: He's one of two dozen people featured in the new book Tomorrow Is a Long Time: Tijuana's Unchecked HIV/AIDS Epidemic, but for those looking for an abbreviated version, Slate shares the sex worker's story (with photos). During his days he lived as "Beto, a gay man who cruised the park. At night he became a transgender woman, Alessandra, Alé for short." The piece by book author Jon Cohen "shows the challenges of ending the AIDS epidemic." "Everyone laughed." So writes Slate of an Oct. 15, 1982, exchange in which White House Press Secretary Larry Speakes was asked about AIDS. Speakes was unfamiliar with it, and reporter Lester Kinsolving told him it was referred to as the "gay plague." The laughter erupted, and continued. A documentary short that captures the exchange debuted today on Vanity Fair. The 7.5-minute When AIDS Was Funny contains audio from two other press conferences.
null | the total syntheses of two key analogues
of vancomycin containing
single - atom exchanges in the binding pocket ( residue 4 amidine and
thioamide ) are disclosed as well as their peripherally modified ( 4-chlorobiphenyl)methyl
( cbp ) derivatives .
their assessment indicates that combined pocket
amidine and cbp peripherally modified analogues exhibit a remarkable
spectrum of antimicrobial activity ( vssa , mrsa , vana and vanb vre )
and impressive potencies ( mic = 0.06–0.005 μg / ml ) against
both vancomycin - sensitive and -resistant bacteria and likely benefit
from two independent and synergistic mechanisms of action . like vancomycin ,
such analogues are likely to display especially durable antibiotic
activity not prone to rapidly acquired clinical resistance . |
allelic variations within a genome of the same species can be classified into three major groups that include differences in the number of tandem repeats at a particular locus [ microsatellites , or simple sequence repeats ( ssrs ) ] , segmental insertions / deletions ( indels ) , and single nucleotide polymorphisms ( snps ) . in order to detect and track these variations in the individuals of a progeny at the dna level , molecular markers are required .
although ssrs , indels , and snps are the three major allelic variations discovered so far , a plethora of molecular markers were developed to detect the polymorphisms that resulted from these three types of variation .
evolution of molecular markers has been primarily driven by the throughput and cost of detection method and the level of reproducibility .
depending on detection method and throughput , all molecular markers can be divided into three major groups : ( 1 ) low - throughput , hybridization - based markers such as restriction fragment length polymorphisms ( rflps ) ; ( 2 ) medium - throughput , pcr - based markers that include random amplification of polymorphic dna ( rapd ) , amplified fragment length polymorphism ( aflp ) , ssrs ; ( 3 ) high - throughput ( htp ) sequence - based markers : snps . in late eighties , rflps were the most popular molecular markers that were widely used in plant molecular genetics because they were reproducible and codominant .
however , the detection of rflps was an expensive , labor- and time - consuming process , which made these markers eventually obsolete . moreover , rflp markers were not amenable to automation . invention of pcr technology and the application of this method for the rapid detection of polymorphisms overthrew low - throughput rflp markers , and new generation of pcr - based markers emerged in the beginning of nineties .
rapd , aflp , and ssr markers are the major pcr - based markers that the research community has been using in various plant systems .
however , rapd markers are anonymous , and the level of their reproducibility is very low due to the non - specific binding of short , random primers .
although aflps are anonymous too , the level of their reproducibility and sensitivity is very high owing to the longer + 1 and + 3 selective primers and the presence of discriminatory nucleotides at the 3′ end of each primer .
that is why aflp markers are still popular in molecular genetics research in crops with little to zero reference genome sequence available .
however , aflp markers did not find widespread application in molecular breeding owing to the lengthy and laborious detection method , which was not amenable to automation either .
therefore , it was not surprising that soon after the discovery of ssr markers in the genome of a plant , they were declared as
markers of choice , because ssrs were able to eliminate all drawbacks of the above - mentioned dna marker technologies .
ssrs were no longer anonymous ; they were highly reproducible , highly polymorphic , and amenable to automation .
despite the cost of detection remaining high , ssr markers had pervaded all areas of plant molecular genetics and breeding in late 90s and the beginning of 21st century .
however , during the last five years , the hegemony of medium - throughput ssrs was eventually broken by snp markers .
first discovered in human genome , snps proved to be universal as well as the most abundant forms of genetic variation among individuals of the same species .
although snps are less polymorphic than ssr markers because of their biallelic nature , they easily compensate this drawback by being abundant , ubiquitous , and amenable to high- and ultra - high - throughput automation .
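this trade - off is easy to quantify . the short python sketch below is an illustration added for orientation ( it is not part of any cited study ) : the expected heterozygosity h = 1 - sum ( p_i^2 ) caps the information a single locus can carry , which is why many biallelic snps are needed to match one multiallelic ssr .

    # illustrative sketch: expected heterozygosity as a measure of
    # per-locus information content, h = 1 - sum(p_i^2).
    def expected_heterozygosity(allele_freqs):
        return 1.0 - sum(p * p for p in allele_freqs)

    # a biallelic snp is capped at h = 0.5 (at p = q = 0.5) ...
    print(expected_heterozygosity([0.5, 0.5]))                # 0.5
    # ... while a six-allele ssr with even frequencies reaches ~0.83,
    print(round(expected_heterozygosity([1.0 / 6] * 6), 2))   # 0.83
    # so several snps are needed to match the resolving power of one ssr.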
however , despite these obvious advantages , there were only a limited number of examples of application of snp markers in plant breeding by 2009 . in this paper
, we tried to summarize the recent progress in the utility of snp markers in plant breeding .
while snp discovery in crops with simple genomes is a relatively straightforward process , complex genomes pose serious obstacles for the researchers interested in developing snps .
prior to the emergence of next - generation sequencing ( ngs ) technologies , researchers used to rely on different experimental strategies to avoid repetitive portions of the genome .
these include discovery of snps experimentally by resequencing of unigene - derived amplicons using sanger 's method and in silico snp discovery through the mining of snps within est databases followed by pcr - based validation .
although these approaches allowed the detection of gene - based snps , their frequency is generally low in conserved genic regions , and they were unable to discover snps located in low - copy noncoding regions and intergenic spaces . additionally , amplicon resequencing was an expensive and labor - intensive procedure .
as many crops are ancient tetraploids with mosaics of scattered duplicated regions , in silico and experimental mining of est databases resulted in the discovery of a large number of nonallelic snps that represented paralogous sequences and were suboptimal for application in molecular breeding .
recent emergence of ngs technologies such as 454 life sciences ( roche applied science , indianapolis , in ) , hiseq ( illumina , san diego , ca ) , solid and ion torrent ( life technologies corporation , carlsbad , ca ) has eliminated the problems associated with low throughput and high cost of snp discovery .
transcriptome resequencing using ngs technologies allows rapid and inexpensive snp discovery within genes and avoids highly repetitive regions of a genome .
this methodology was successfully applied in several plant genomes , including maize , canola , eucalyptus , sugarcane , tree species , wheat , avocado , and black currant .
originally developed for human disease diagnostic research , the nimblegen sequence capture technology ( roche applied science , in ) brought the detection of gene - based snps in plants into higher throughput and coverage level .
this technology consists of exon sequence capture and enrichment by microarray followed by ngs for targeted resequencing .
similar in - solution target capture technologies , such as agilent sureselect , are also commercially available for genome / exome mining studies . however , this technology would be efficient only for crops with available reference genome sequence or large transcriptome ( est ) datasets , since the design of capture probes requires these reference resources . despite the attractiveness of snp discovery via transcriptome or exome resequencing , these approaches have limitations .
it is obvious that the availability of snps within coding sequences is a very powerful tool for molecular geneticists to detect a causative mutation .
however , often qtl are located in noncoding regulatory sequences such as enhancers or locus control regions , which could be located several megabases away from genes within intergenic spaces .
discovery of snps located within those regulatory elements via transcriptome or exon sequencing is limited . in order to discover snps in a genome - wide fashion and avoid repetitive and duplicated dna
, it is very important to employ genome complexity reduction techniques coupled with ngs technologies .
several genome complexity reduction techniques have been developed over the years , including high cot selection , methylation filtering , and microarray - based genomic selection .
these techniques mainly reduce the number of repetitive sequences but lack the power to recognize and eliminate duplicated sequences , which cause the detection of false - positive snps . unlike the above - mentioned techniques , recently developed genome complexity reduction technologies such as complexity reduction of polymorphic sequences ( crops ) ( keygene n.v . ,
wageningen , the netherlands ) and restriction site associated dna ( rad ) ( floragenics , eugene , or , usa ) are computationally well equipped and capable of filtering out duplicated snps .
these systems were successfully applied to discover snps in crops with and without reference genome sequences .
although several complexity reduction approaches are being developed to generate data from ngs platforms , it is often challenging to identify candidate snps in polyploid crop species such as potato , tobacco , cotton , canola , and wheat .
in general , minor allele frequency could be used as a measure to identify candidate snps in diploid species . however ,
in polyploid crops , you often find loci that are polymorphic within a single genotype due to the presence of either homoeologous loci from the individual subgenomes ( homoeologous snps ) or paralogous loci from duplicated regions of the genome .
such false positive snps are not useful for genetic mapping purposes and often lead to a lower validation rate during assays .
use of haplotype information besides the allelic frequency would help to distinguish homologous snps ( true snps ) from those of homoeologous loci ( false positives ) .
bioinformatic programs such as haplosnper would facilitate identification of candidate loci for assay design purposes in polyploid crops .
such approaches could also be extended to other complex and highly repetitive diploid genomes such as barley .
complexity reduction approaches , combined with sophisticated computational tools , would expedite snp discovery and validation efforts in polyploids .
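a minimal sketch of such a candidate filter is shown below ( illustrative only : the genotype coding and thresholds are assumed , and dedicated tools such as haplosnper rely on haplotype models rather than this simple rule ) . the idea is that a locus heterozygous in nearly every inbred line is more likely a collapsed homoeologous or paralogous locus than a true allelic snp .

    # flag candidate snps in inbred polyploid material: a locus that is
    # heterozygous in (almost) every line is likely a collapsed
    # homoeologous/paralogous locus rather than a true allelic snp.
    def classify_candidate(calls, max_het=0.1, min_maf=0.05):
        n = len(calls)
        het_rate = sum(c == 'AB' for c in calls) / n
        b_freq = sum({'AA': 0, 'AB': 1, 'BB': 2}[c] for c in calls) / (2.0 * n)
        maf = min(b_freq, 1.0 - b_freq)
        if het_rate > max_het:
            return 'suspect homoeo/paralog'
        if maf < min_maf:
            return 'low maf: poor mapping power'
        return 'candidate true snp'

    print(classify_candidate(['AA', 'BB'] * 4 + ['AA', 'AB']))  # candidate true snp
    print(classify_candidate(['AB'] * 10))                      # suspect homoeo/paralog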
although crops and rad technologies are powerful tools to detect snps in genome - wide fashion , they can hardly be called htp , because on an average only ~1,000 snps pass stringent quality control .
while these numbers are enough to generate genetic linkage maps of reasonable saturation and carry out preliminary qtl mapping , they are not adequate to implement genome - wide association studies ( gwas ) . depending on the rate of linkage disequilibrium decay , gwas may require from tens of thousands to millions of markers , a density that genotyping - by - sequencing ( gbs ) approaches can deliver .
discovery of a large number of snps using gbs was demonstrated in maize and sorghum .
gbs not only increases the sequencing throughput by several orders of magnitude but also has multiplexing capabilities . to eliminate a large portion of repetitive sequences , a type ii restriction endonuclease , apeki ,
is applied to digest dna prior to sequencing to generate reduced representation libraries ( genome complexity reduction component ) , which are further subject to sequencing . in polyploid crops ,
gbs might be challenging , but the associated complexity reduction methods could be used for snp discovery . for discovery purposes ,
the availability of a reference genome is not an absolute requirement to implement gbs approach .
however , in organisms that do not have a reference genome , gbs - derived snps must be validated using one of the techniques that are described in the following section , which might dramatically increase per marker price .
validation needs to be done primarily to discard paralogous snps . for organisms with a reference genome sequence ,
the validation step is replaced by in silico mapping of the sequenced fragments to the genome .
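the complexity - reduction step of gbs can be sketched in silico . the toy example below ( invented sequence ; not a production pipeline , and the exact cut offset within the apeki site g^cwgc is ignored ) splits a sequence at apeki recognition sites ( gcwgc , w = a or t ) and size - selects fragments , mimicking a reduced representation library .

    import re

    def apeki_fragments(seq, min_len=50, max_len=350):
        # find every start of the recognition site gc[at]gc (overlaps allowed)
        cuts = [m.start() for m in re.finditer(r'(?=GC[AT]GC)', seq)]
        bounds = [0] + cuts + [len(seq)]
        frags = [seq[a:b] for a, b in zip(bounds, bounds[1:])]
        # size selection mimics the reduced-representation library
        return [f for f in frags if min_len <= len(f) <= max_len]

    toy = 'ATGC' * 200 + 'GCAGC' + 'TTAA' * 40 + 'GCTGC' + 'CCGG' * 30
    print([len(f) for f in apeki_fragments(toy)])  # [165, 125]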
although gbs has the potential to discover several million snps , one of the major drawbacks of this technique is large numbers of missing data . to solve this problem , computational biologists developed data imputation models such as beagle v3.0.2 and impute v2 , to bring imputed data as close as possible to the real data [ 50 , 51 ] .
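for intuition , the snippet below imputes a missing call with the most frequent observed call at the locus ; this naive rule is only an illustration , since beagle and impute fit haplotype models and are far more accurate .

    from collections import Counter

    def impute_locus(calls):
        # naive rule: replace missing calls (None) with the modal observed call
        mode = Counter(c for c in calls if c is not None).most_common(1)[0][0]
        return [mode if c is None else c for c in calls]

    print(impute_locus(['AA', None, 'AA', 'AB', None]))
    # ['AA', 'AA', 'AA', 'AB', 'AA']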
the availability of reference sequence and sophisticated software does not always guarantee that the discovered snp can be converted into a valid marker . in order to ensure that the discovered snp is a mendelian locus , it must be validated experimentally .
the validation of a marker is the process of designing an assay based on the discovered polymorphism and then genotyping a panel of diverse germplasm and segregating population .
compared to the collection of unrelated lines , a segregating population is more informative as a validation panel because it allows the inspection of the discriminatory ability and segregation patterns of a marker which helps the researcher to understand whether it is a mendelian locus or a duplicated / repetitive sequence that escaped the software filter .
the most popular htp assays / chemistries and genotyping platforms that are currently being used for snp validation are illumina 's beadarray technology - based golden gate ( gg ) and infinium assays , life technologies ' taqman assay coupled with openarray platform ( taqman openarray genotyping system , product bulletin ) , and kbiosciences ' competitive allele specific pcr ( kaspar ) combined with the snp line platform ( snp line xl ; http://www.kbioscience.co.uk ) .
these modern genotyping assays and platforms differ from each other in their chemistry , cost , and throughput of samples to genotype and number of snps to validate .
the choice of chemistry and genotyping platform depends on many factors that include the length of snp context sequence , overall number of snps to genotype , and finally the funds available to the researcher , because most of these chemistries still remain cost intensive .
comparative analyses of these four genotyping assays and platforms were described in kumpatla et al .
though all genotyping chemistries and platforms are applicable to generate genotypic data in polyploid crops , analysis of snp calls is somewhat challenging in polyploids due to multiallele combinations in the genotypes .
snps in polyploid species can be broadly classified as simple snps , hemi - snps , and homoeo - snps . here , we describe simple , hemi- , and homoeo - snps using an example of allele calls in tetraploid and diploid cotton species ( figure 1 ) .
genomes of tetraploid cotton species , gossypium hirsutum ( ad1 ) and g. barbadense ( ad2 ) , consist of two subgenomes a and d , where a genome was derived from diploid progenitors , such as g. herbaceum ( a1 ) and g. arboreum ( a2 ) , and d genome resulted from another diploid progenitor g. raimondii ( d5 ) .
simple , or true snps are markers that detect allelic variation between homologous loci of the same subgenome of two tetraploid samples .
for example , in figure 1(a ) , a snp marker clearly detects polymorphism within the a subgenomes of g. hirsutum ( ad1 ) and g. barbadense ( ad2 ) and separates samples into homozygous a ( blue ) and b ( red ) clusters .
this marker does not discriminate polymorphism in d subgenome , because the d genome allele is absent there ( pink dot in g. raimondii ) .
in contrast to simple snps , hemi - snps detect allelic variation in the homozygous state in one sample and the heterozygous state in the other sample . in figure 1(b ) , snp marker detects both alleles ( a and b ) in g. hirsutum ( heterozygous green cluster ) and one allele a in g. barbadense ( a homozygous blue cluster ) and could be vice versa .
homoeo - snps detect homoeologous and possibly paralogous loci both in a and d subgenomes and result in monomorphic loci in tetraploid species ( right image ) . in figure 1(c ) , the a - genome progenitors ( g. herbaceum and g. arboreum ) had allele a ( blue ) and the d - genome progenitor ( g. raimondii ) had allele b ( red ) , but both tetraploid species ( g. hirsutum and g. barbadense ) were grouped into the heterozygous ab ( green ) cluster .
as homoeo - snps can detect paralogous loci , the diploid progenitors both have different alleles .
simple snps as well as hemi - snps are useful markers for genetic mapping and diversity screening studies .
simple snps segregate like the markers in diploids in most of the mapping populations and would account for approximately 10–30% of total polymorphic snps in various polyploid crop species .
hemi - snps form a major category ( 30–60% ) of polymorphic snps in a polyploid crop species and could be used for genetic mapping purposes in f2 , ril , and dh populations .
homoeo - snps are of lesser value for mapping purposes as most of the genotypes result in heterologous loci due to polymorphism between the homoeologous genomes or duplicated loci within each of the polyploid genotypes .
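the classification just described can be written compactly . the sketch below assumes cluster calls coded 'AA' / 'BB' ( homozygous ) and 'AB' ( heterozygous ) for two tetraploid lines such as g. hirsutum and g. barbadense ; it is a paraphrase of the logic above , not code from the cited study .

    def classify_snp(call_1, call_2):
        # call_1, call_2: cluster calls for the two tetraploid lines
        het1, het2 = call_1 == 'AB', call_2 == 'AB'
        if het1 and het2:
            return 'homoeo-snp (monomorphic between lines; homoeologous/paralogous)'
        if het1 or het2:
            return 'hemi-snp (usable for mapping)'
        return 'simple snp' if call_1 != call_2 else 'monomorphic'

    print(classify_snp('AA', 'BB'))  # simple snp
    print(classify_snp('AB', 'AA'))  # hemi-snp (usable for mapping)
    print(classify_snp('AB', 'AB'))  # homoeo-snp (monomorphic between lines; ...)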
genetic mapping studies involve genetic linkage analysis , which is based on the concept of genetic recombination during meiosis .
this encompasses developing genetic linkage maps following genotyping of individuals in segregating populations with dna markers covering the genome of that organism . since their discovery in the 1980s ,
dna - based markers have been widely used in developing saturated genetic linkage maps as well as for the mapping and discovery of genes / qtl . with the large - scale availability of the sequence information and development of htp technologies for snp genotyping , snps have become the markers of choice for genetic mapping studies .
this is primarily because snps are highly abundant in the genomes and , therefore , they can provide the highest map resolution compared to other marker systems [ 58 , 59 ] . a review of the selected examples of qtl and gene discovery using snp markers is presented below .
a recent study on qtl analysis in rice for yield and three - yield - component traits , number of tillers per plant , number of grains per panicle , and grain weight compared a snp - based map to that of a previous rflp / ssr - based qtl map generated using the same mapping population . using the ultra - high - density snp map
, the authors showed that this map had more power and resolution relative to the rflp / ssr map .
this was clearly evident by the analysis of the two main qtl for grain weight , kgw3a ( gs3 ) and kgw5 ( gw5/qsw5 ) . using the snp bin map ,
gw5/qsw5 qtl for grain width was accurately narrowed down to a 123 kb region as compared to the 12.4 mb region based on the rflp / ssr genetic map .
likewise , gs3 qtl for grain length was mapped to a 197 kb interval in comparison to a 6 mb region with the rflp / ssr genetic map . besides the power and the resolution , maps based on high - density snp markers
are also highly suitable for fine mapping and cloning of qtl and at times snps on these maps are also functionally associated with the natural variation in the trait . in another qtl mapping project ,
snp and indel markers were used to fine map qsh1 gene , a major qtl of seed shattering trait in rice .
the qtl were initially detected using rflp and rapd markers on f2 plants . using large bc4f2 and bc3f2 populations in fine mapping approach with snp and indel markers
, the authors mapped the functional natural variation to a 612 bp interval between the qtl flanking markers and discovered only one snp .
they further showed that this snp in the 5′ regulatory region of the qsh1 gene caused loss of seed shattering .
fine mapping approach was also taken to positionally clone the rice bacterial blight resistance gene xa5 , by isolating the recombination breakpoints to a pair of snps followed by sequencing of the corresponding 5 kb region .
several studies have shown that the snps and indels are highly abundant and present throughout the genome in various species including plants [ 6264 ] .
snp genotyping is a valuable tool for gene mapping , map - based cloning , and marker assisted selection ( mas ) in crops .
a study was conducted to assess the feasibility of snps and indels as dna markers in genetic analysis and marker - assisted breeding in rice by analyzing these sequence polymorphisms in the genomic region containing piz and piz - t rice blast resistance genes and developing pcr - based snp markers .
the authors discovered that snps were abundant in the piz and piz - t ( averaging one snp every 248 bp ) , while indels were much lower .
this dense distribution of snps helped in developing snp markers in the vicinity of these genes .
advancements in rice genomics have led to mapping and cloning of several genes and qtl controlling agronomically important traits , enabled routine use of snp markers for mas , gene pyramiding , and marker - assisted breeding ( mab ) [ 6668 ] .
snp markers have facilitated the dissection of complex traits such as flowering time in maize . using a set of 5000 rils , which represent the nested association mapping ( nam ) population and genotyping with 1,200 snp markers
, the authors discovered that the genetic architecture of flowering time is controlled by small additive qtl rather than a single large - effect qtl .
twenty - nine qtl were discovered and candidate genes were identified with genome - wide nam approach using 1.6 million snps .
proprietary snp markers developed by companies are being predominantly used in their private breeding programs . a study from pioneer
hi - bred international inc . reported identifying a high - oil qtl ( qho6 ) affecting maize seed oil and oleic acid contents .
this qtl encodes an acyl - coa : diacylglycerol acyltransferase ( dgat1 - 2 ) , which catalyzes the final step of oil synthesis .
recent advances in wheat genomics have led to the implementation of high - density snp genotyping in wheat [ 7275 ] .
gene - based snp markers were developed for lr34/yr18/pm38 locus that confers resistance to leaf rust , stripe rust , and powdery mildew diseases .
these markers serve as efficient tools for mas and mab of disease resistant wheat lines . another economically important wheat disease , fusarium head blight ( fhb ) , has been extensively studied . several qtl controlling fhb resistance have been identified , with the most important being fhb1 . recently , new snp markers closely linked to fhb1 were developed .
these new markers would be useful for mas and fine mapping towards cloning the fhb1 gene .
mas in wheat has been extensively applied for simply inherited traits that are difficult to score . in order to improve the effectiveness of mas and clone the soybean aphid resistance gene , rag1 ,
fine mapping was done to accurately position the gene , which was previously mapped to a 12 cm interval .
the authors mapped the gene between two snp markers that corresponded to a physical distance of 115 kb and identified several candidate genes .
similarly , another aphid resistance gene , rag2 , originally mapped to a 10 cm interval , was fine mapped to a 54 kb interval using snp markers that were developed by resequencing of target intervals and sequence - tagged sites . in another study that used a similar approach ,
the authors identified snp markers tightly linked to a qtl conferring resistance to southern root - knot nematode by developing these snp markers from the bacterial artificial chromosome ( bac ) ends and ssr - containing genomic dna clones . in all of these examples the main idea behind the identification of closely linked snp markers was to enhance the efficiency and cost effectiveness through mas and increase the resolution within the target locus . in a study conducted in canola to map the fad2 and fad3 gene , single nucleotide mutations were identified by sequencing the genomic clones of these genes and subsequently snp markers were developed .
allele - specific pcr assays were developed to enable direct selection of desirable fad2 and fad3 alleles in marker - assisted trait introgression and breeding . in barley ,
snp markers were identified that were linked to a covered smut resistance gene , ruh.7h , by using high - resolution melting ( hrm ) technique . in sugar beet ,
an anchored linkage map based on aflp , snp , and rapd markers was developed to map qtl for beet necrotic yellow vein virus resistance genes , rz4 and rz5 .
a consensus genetic map based on est - derived snps was developed for cowpea that would be an important resource for genomic and qtl mapping studies in this crop . in one of the post - genomic era studies in 2002 ,
the fine mapping and map - based cloning approaches were used to clone the vtc2 gene in arabidopsis .
the authors fine mapped the gene interval from ~980 kb region to a 20 kb interval with snp and indel markers .
additional nine candidate genes were identified in that interval and subsequently the underlying mutation was discovered .
although only a few examples that demonstrate the application of snp markers in qtl mapping and genomic studies have been mentioned here , several other studies have been published in this area .
recent advances in htp genotyping technologies and sequence information will further pave the way for rapid identification of causative variations and cloning of qtl of interest for use in mab .
gwas is increasingly becoming a popular tool for dissecting complex traits in plants [ 8992 ] .
the idea behind gwas is to genotype a large number of markers distributed across the genome so that the phenotype or the functional alleles will be in ld with one or few markers that could then be used in the breeding program .
however , due to limited extent of ld , a greater number of markers are required for sufficient power to detect linkage between the marker and the underlying phenotypic variation .
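to make the scan concrete , the sketch below runs a naive single - marker association test on simulated data ( all data are invented , and a real gwas would additionally correct for population structure and kinship , e.g. , with mixed models ) .

    import numpy as np
    from math import erf, sqrt

    def snp_scan(G, y):
        # G: (individuals x snps) matrix of 0/1/2 allele counts; y: phenotype
        n, m = G.shape
        pvals = np.empty(m)
        for j in range(m):
            r = np.corrcoef(G[:, j], y)[0, 1]
            t = r * sqrt((n - 2) / max(1e-12, 1.0 - r * r))
            # two-sided p-value via a normal approximation, for brevity
            pvals[j] = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t) / sqrt(2))))
        return pvals

    rng = np.random.default_rng(0)
    G = rng.integers(0, 3, size=(200, 500)).astype(float)
    y = 0.8 * G[:, 42] + rng.normal(size=200)  # snp 42 is the simulated causal locus
    print(int(np.argmin(snp_scan(G, y))))      # expected: 42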
several studies on association mapping in plants have been published and reviewed in the past [ 89 , 90 , 92 , 93 ] .
a few selected examples on the gwas and candidate gene association ( cga ) studies that utilized snp markers are described below .
the successful use and first time demonstration of the power of gwas was through the identification of a putative gene associated with a qtl in maize . in that study ,
a single locus with major effect on oleic acid was mapped to a 4 cm genetic interval by using snp haplotypes at 8,590 loci .
the authors identified a fatty acid desaturase gene , fad2 , at ~2 kb from one of the associated markers , and this was considered a likely causative gene . with the discovery of millions of snps in maize and the availability of tools such as nam populations , gwas was effectively applied to dissect the genetic architecture of leaf traits and it was also shown that variations at the liguleless genes contributed to more upright leaf phenotype .
utility of the gwas approach was demonstrated in barley through the mapping of a qtl for spot blotch disease resistance .
using the diversity array technology ( dart ) and snp markers , the authors identified several qtl , some of which were not identified for this trait earlier .
another variant of the association mapping method is the cga , where the association between one or few gene candidate loci and the trait of interest is tested . using this approach
24 gene candidates were analyzed for association with the field resistance to late blight disease in potato and plant maturity .
nine snps were identified to be associated with maturity corrected resistance , explaining 50% of the genetic variance of this trait .
two snps at the allene oxide synthase 2 ( staos2 ) gene locus were associated with the largest effect on the trait of interest .
a gwas approach was also successfully applied to understand the genetic architecture of complex diseases such as northern and southern corn leaf blights [ 70 , 98 ] .
although the number of papers dedicated to the application of gwas to reveal the genetic basis of agronomic traits is growing , the practical utility of minor qtl in molecular breeding is yet to be shown .
as gwas requires large number of molecular markers , the utility of gwas in dissection of molecular basis of traits in polyploid crops such as canola , wheat , and cotton has been fairly limited due to the insufficient number of polymorphic markers and the absence of reference genome .
however , recently developed associative transcriptomics method has a potential to overcome the above - mentioned shortages .
one study leveraged differentially expressed transcriptome sequences to develop molecular markers in the tetraploid crop brassica napus and associated them with glucosinolate content variation in seeds . due to the precision of this method
, scientists were able to correlate specific deletions in canola genome with two qtl controlling the trait .
annotation of deleted regions revealed the orthologs of the transcription factor hag1 , which controlled aliphatic glucosinolate biosynthesis in a. thaliana .
due to the availability of htp snp detection and validation technologies , the development of snp markers becomes a routine process , especially in crops with reference genome .
how has that influenced the application of snp markers in plant breeding ? in a review article , xu and crouch indicated a fairly low number of articles dedicated to marker - assisted selection for the 1986–2005 period . the combination of three key phrases ( marker - assisted selection and snp and plant breeding ) , indeed , shows only 637 articles at google scholar for that period .
however , a similar search for the period spanning 2006 through 2012 demonstrates an almost sevenfold increase ( ~4,560 articles ) in the number of articles indicating the application of snps in mas .
a vast majority of those publications are from public sector and primarily describe mapping qtl using snps and state the potential usefulness of those markers in mas without any experimental support for that . for most of those research studies ,
qtl mapping is the final destination and further application of those markers in actual mas leading to the development of varieties seldom happens .
fairly low impact of academic research in the mas - based variety development can be explained by the lack of funding to complete the entire marker development pipeline ( mdp ) , which can be long term and cost intensive .
mdp includes several steps such as ( 1 ) population development , ( 2 ) initial qtl mapping , ( 3 ) qtl validation ( testing in several locations and years and implementing fine mapping ) , and ( 4 ) marker validation ( development of inexpensive but htp and automation amenable assays ) .
every step of the development of markers linked to qtl is associated with numerous constraints , which may take several years and substantial funding to resolve . however , since 2006 , there have been a few success stories about the development of varieties using snps in publications derived from academic research , including the development of submergence - tolerant rice cultivars , rice cultivars with improved eating , cooking , and sensory quality , leaf rust resistant wheat variety patwin , and maize cultivar with low phytic acid .
although the private sector does not normally release details of its breeding methodologies to the public , several papers published by monsanto [ 106 , 107 ] , pioneer hi - bred , syngenta , and dow agrosciences indicate that commercial organizations are the main drivers in the application of snp markers in mas .
current mas strategies fit the breeding programs for the traits that are highly heritable and governed by a single gene or one major qtl that explains a large portion of the phenotypic variability . in reality , most of the agronomic traits such as yield , drought and heat tolerance , nitrogen and water use efficiency , and fiber quality in cotton have complex inheritance that is controlled by multiple qtl with minor effect . use of one of those minor qtl in mas will be inefficient because of its negligible effect on phenotype . the mas scheme using paternity testing has recently been proposed to address challenges associated with selection gains that can be achieved in outbred forage crops .
paternity testing , a nonlinkage - based mas scheme , improves selection gains by increasing parental control in the selection gain equation .
the authors demonstrated paternity testing mas in three red clover breeding populations by using permutation - based truncation selection for a biomass - persistence index trait and achieved paternity - based selection gains that were greater than double the selection gains based on maternity alone .
the paternity was determined by using a small set ( 11 ) of ssr markers .
snp markers can also be used for paternity testing , but one would require a relatively larger number of snp loci .
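the exclusion logic behind paternity - based mas is simple to sketch ( the allele sizes and genotypes below are invented ) : a candidate father is rejected if , at any locus , he shares no allele with the offspring 's non - maternal allele set .

    # toy exclusion-based paternity test (illustrative only)
    def compatible(offspring, mother, father):
        # each genotype is a set of alleles per locus, e.g. {153, 157}
        for o, m, f in zip(offspring, mother, father):
            # allele(s) not explained by the mother; if all are maternal,
            # either allele could be the paternal one
            paternal_options = (o - m) if (o - m) else o
            if not (paternal_options & f):
                return False  # excluded at this locus
        return True

    off = [{153, 157}, {201, 205}]
    mom = [{153}, {201}]
    dad1 = [{157, 161}, {205, 209}]   # shares the paternal allele at both loci
    dad2 = [{149, 161}, {209, 213}]   # excluded at both loci
    print(compatible(off, mom, dad1), compatible(off, mom, dad2))  # True False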
meuwissen et al . first described a new methodology called genomic selection ( gs ) that was intended to solve problems related to mas of complex traits .
this methodology also applies molecular markers but in a different fashion in both diploid and polyploid crop species . unlike mas , in gs markers are not used to identify and tag individual qtl .
then the comprehensive information on all possible loci , haplotypes , and marker effects across the entire genome is used to calculate genomic estimated breeding value ( gebv ) of a particular line in the breeding population .
gs of superior lines can be carried out within any breeding population . in order to enable successful gs , a training population must first be assembled , phenotyped , and genotyped .
the population should not be necessarily derived from bi - parental cross but must be representative of selection candidates in the breeding program to which gs will be applied .
taking into account the low cost of sequencing , the best choice is the gbs implementation , which will yield maximum number of polymorphisms .
the sequence of the two events , that is , phenotypic and genotypic data collection , is arbitrary and can be done in parallel .
when both phenotypic and genotypic data are ready , one can start training the gs model . in order to train the gs model , the effect of each marker on the trait must be estimated .
the effect of a marker is represented by a number with a positive or negative sign that indicates the positive or negative effect , respectively , of a particular locus to the phenotype .
the model is then considered trained and ready to assess any breeding population different from the experimental one for the same trait .
availability of trained gs model does not require the collection of phenotypic data from new breeding populations .
the same set of trained markers will be used to genotype a new breeding population .
based on genotypic data , the known effects of each marker will be summed and gebv of each line will be calculated .
the higher the gebv value of an individual line , the more the chances that this line will be selected and advanced in the breeding cycle .
thus , gs using high - density marker coverage has a potential to capture qtl with major and minor effects and eliminate the need to collect phenotypic data in all breeding cycles .
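a minimal numerical sketch of this train - then - predict workflow is given below ( rr - blup - style ridge regression on invented data ; the shrinkage parameter and population sizes are arbitrary , and production gs models are considerably more elaborate ) .

    import numpy as np

    def train_marker_effects(G, y, lam=100.0):
        # fit all marker effects jointly: (Gc'Gc + lam*I) beta = Gc'(y - ybar)
        mu = G.mean(axis=0)
        Gc = G - mu
        beta = np.linalg.solve(Gc.T @ Gc + lam * np.eye(G.shape[1]),
                               Gc.T @ (y - y.mean()))
        return beta, mu

    def gebv(G_new, beta, mu):
        # genomic estimated breeding value: summed marker effects per line
        return (G_new - mu) @ beta

    rng = np.random.default_rng(1)
    G = rng.integers(0, 3, size=(300, 1000)).astype(float)  # 300 lines, 1000 snps
    y = G @ rng.normal(scale=0.1, size=1000) + rng.normal(size=300)
    beta, mu = train_marker_effects(G[:200], y[:200])        # training population
    print(np.argsort(gebv(G[200:], beta, mu))[::-1][:5])     # top 5 untested lines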
also , the application of gs was demonstrated to reduce the number of breeding cycles and increase the annual gain .
simulation studies based on simulated and empirical data demonstrated that gebv accuracy could be within 0.62–0.85 .
one study used a previously reported gebv accuracy of 0.53 and reported three- and twofold annual gains in maize and winter barley , respectively .
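these gain figures follow from the standard annual form of the breeder 's equation , quoted here for orientation ( the symbols are generic , not values taken from the cited studies ) :

    \Delta G_{\mathrm{year}} = \frac{ i \, r \, \sigma_A }{ L }

here i is the selection intensity , r the selection accuracy ( e.g. , the gebv accuracy of 0.53 above ) , \sigma_A the additive genetic standard deviation , and L the cycle length in years ; because gs can shorten L severalfold by skipping phenotyping in most cycles , annual gain can rise even when r is below the accuracy of phenotypic selection .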
the obvious advantages of gs over traditional mas have been successfully proven in animal breeding .
rapid evolution of sequencing technologies and htp snp genotyping systems are enabling generation and validation of millions of markers , giving a cautious optimism for successful application of gs in breeding for complex traits [ 117120 ] .
snp markers have become extremely popular in plant molecular genetics due to their genome - wide abundance and amenability for high- to ultra - high - throughput detection platforms .
unlike earlier marker systems , snps made it possible to create saturated , if not supersaturated , genetic maps , thereby enabling genome - wide tracking , fine mapping of target regions , rapid association of markers with a trait , and accelerated cloning of gene / qtl of interest . on the flip side ,
there are some challenges that need to be addressed or overcome while using snps . for example
, the biallelic nature of snps needs to be compensated by discovering and using a larger number of snps to arrive at the same or higher power as that of earlier - generation molecular markers .
this could be cost prohibitive depending on the crop and the sequence resources available for that genome .
working with polyploid crops is another challenge where useful snps are only a small percentage of the total available polymorphisms .
creative strategies need to be employed to generate a reasonable number of snps in those species .
the use of snp markers in mab programs has been growing at a faster pace and so is the development of technologies and platforms for the discovery and htp screening of snps in many crops .
snp chips are currently available for several crops ; however , one disadvantage is that these readily available chips are made based on snps discovered from certain genotypes and , therefore , may not be ideal for projects utilizing unrelated genotypes .
this necessitates creation of multiple chips or the usage of technologies that permit design flexibility but are economical .
although gbs creates great opportunities to discover a large number of snps at lower per sample cost within the genotypes of interest , the lack of adequate computational capabilities such as reliable data imputation algorithms and powerful computers allowing quick processing and the storage of a large amount of sequencing data becomes a major bottleneck . despite certain disadvantages or challenges ,
it is clear that snp markers , in combination with genomics and other next - generation technologies , have been accelerating the pace and gains of plant breeding . | the use of molecular markers has revolutionized the pace and precision of plant genetic analysis which in turn facilitated the implementation of molecular breeding of crops .
the last three decades have seen tremendous advances in the evolution of marker systems and the respective detection platforms .
markers based on single nucleotide polymorphisms ( snps ) have rapidly gained the center stage of molecular genetics during the recent years due to their abundance in the genomes and their amenability for high - throughput detection formats and platforms .
computational approaches dominate snp discovery methods due to the ever - increasing sequence information in public databases ; however , complex genomes pose special challenges in the identification of informative snps warranting alternative strategies in those crops .
many genotyping platforms and chemistries have become available making the use of snps even more attractive and efficient .
this paper provides a review of historical and current efforts in the development , validation , and application of snp markers in qtl / gene discovery and plant breeding by discussing key experimental strategies and cases exemplifying their impact . |
in strongly coupled plasmas @xcite , the coulomb interaction energy between neighboring particles exceeds the kinetic energy , leading to non - binary collisions that display temporal and spatial correlations between past and future collision events .
such ` non - markovian ' dynamics invalidates traditional theory for collision rates @xcite and calculation of transport coefficients @xcite and frustrates the formulation of a tractable kinetic theory .
this challenging fundamental problem is also one of the major limitations to our ability to accurately model equilibration , transport , and equations of state of dense laboratory and astrophysical plasmas @xcite , which impacts the design of inertial - confinement - fusion experiments @xcite , stellar chronometry based on white dwarf stars @xcite , and models of planet formation @xcite .
molecular dynamics ( md ) simulations have been the principal recourse for obtaining a microscopic understanding of short - time collision dynamics in this regime @xcite , but direct comparison of results with experiment has not been possible . in the experiments described here ,
we isolate the effects of strong coupling on collisional processes by measuring the velocity autocorrelation function ( vaf ) for charges in a strongly coupled plasma .
the vaf , a central quantity in the statistical physics of many - body systems , encodes the influence of correlated collision dynamics and system memory on individual particle trajectories @xcite , and it is defined as @xmath0 . here , @xmath1 is the velocity of particle @xmath2 , and brackets indicate an equilibrium , canonical - ensemble average .
remarkably , we obtain this individual - particle quantity from measurement of the bulk relaxation of the average velocity of a tagged subpopulation of particles in an equilibrium plasma .
this contrasts with measurements of macroscopic - particle vafs based on statistical sampling of individual trajectories , which is commonly used in studies of dusty - plasma kinetics @xcite and brownian motion @xcite .
the vaf also provides information on transport processes since its time - integral yields the self - diffusion coefficient through the green - kubo relation , @xmath3 , which describes the long - time mean - square displacement of a given particle through @xmath4 @xcite .
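as a concrete illustration of these two definitions , here is a minimal python sketch that computes the normalized vaf from ( simulated ) particle velocities and the self - diffusion coefficient via the green - kubo integral . the array shapes and the kT / m prefactor convention are assumptions , since the exact expressions sit behind the @xmath placeholders .

```python
import numpy as np

def normalized_vaf(v):
    """normalized VAF Z(t) from velocities v of shape (n_steps, n_particles, 3),
    averaging over particles and over time origins (equilibrium assumed)."""
    n = v.shape[0]
    z = np.array([np.mean(np.sum(v[: n - k] * v[k:], axis=-1)) for k in range(n)])
    return z / z[0]

def self_diffusion(z, dt, kT_over_m):
    """Green-Kubo in 3D: D = (1/3) <v(0).v(0)> * int Z dt = (kT/m) * int Z dt,
    using <|v|^2> = 3 kT/m for an equilibrium Maxwellian."""
    return kT_over_m * np.trapz(z, dx=dt)
```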
our results provide the first experimental measurement of the vaf and of self diffusion in a three - dimensional strongly coupled plasma .
these results are found to be consistent with md simulation to within the experimental uncertainty @xcite .
measurements are performed on ions in an ultracold neutral plasma ( ucnp ) , which is formed by photoionizing a laser - cooled atomic gas @xcite . shortly after plasma creation , ions equilibrate in the strongly coupled regime with coulomb coupling parameter @xmath5 as large as @xmath6 . here , @xmath7 is the temperature and @xmath8 is the density .
electrons in the plasma provide a neutralizing background and static screening on the ionic time scale with a debye screening length @xmath9 .
this makes ucnps a nearly ideal realization of a yukawa one - component plasma @xcite , a paradigm model of plasma and statistical physics in which particles interact through a pair - wise coulomb potential screened by a factor @xmath10 .
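for reference , the conventional definitions behind these quantities are given below ; the exact expressions are hidden in the @xmath placeholders , so this is the standard textbook form rather than a verbatim reconstruction .

```latex
% standard definitions of the coupling parameter, Wigner-Seitz radius,
% and Yukawa pair potential (conventional form, not recovered verbatim)
\Gamma = \frac{e^2}{4\pi\varepsilon_0\, a\, k_B T_i}, \qquad
a = \left(\frac{3}{4\pi n}\right)^{1/3}, \qquad
\phi(r) = \frac{e^2}{4\pi\varepsilon_0\, r}\, e^{-r/\lambda_D}.
```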
ultracold plasmas are quiescent , near local equilibrium , and ` clean ' in the sense they are composed of a single ion species and free of strong background fields .
strong coupling is obtained at relatively low density , which slows the dynamics and makes short - timescale processes ( compared to the inverse collision rate ) experimentally accessible .
powerful diagnostics exist for dense laboratory plasmas . however , their interpretation is complicated by the transient and often non - equilibrium nature of the plasmas , and they do not provide model - independent information on the effects of particle correlations at short timescales .
these include , for example , measurement of the dynamic structure factor with x - ray thomson - scattering @xcite and measurement of electrical conductivity using a variety of techniques @xcite .
comprehensive studies of self diffusivity @xcite exist for strongly coupled dusty plasmas , but these systems are typically two - dimensional and therefore do not directly illuminate the kinetics of bulk , three - dimensional plasmas .
we perform experiments on ultracold neutral plasmas @xcite , which are created by first laser - cooling @xmath11sr atoms in a magneto - optical trap ( mot ) @xcite .
atoms are then photoionized with one photon from a narrow - band laser resonant with the principal @xmath12 transition at 461 nm and another photon from a tunable 10-ns , pulsed dye laser near 413 nm .
the electron temperature ( @xmath13 ) in the plasma is determined by the excess photon energy above the ionization threshold , which can be tuned to set @xmath14k .
ions initially have very little kinetic energy , but they possess an excess of coulomb potential energy , and they equilibrate on a microsecond timescale to a temperature @xmath15 , determined primarily by the plasma density @xcite .
the ion equilibration process is called disorder - induced heating . disorder - induced heating limits the ions to @xmath16 . to obtain measurements on more weakly coupled systems ( @xmath17 ) , the plasma is heated with ion acoustic waves @xcite .
waves are excited by placing a grating ( 10 cycles / mm ) in the path of the ionization beam , which is then imaged onto the mot for greatest contrast to create a plasma with a striped density modulation .
after sufficiently long time , the waves completely damp , heating the plasma and reducing @xmath18 .
electrons provide a uniform screening background for the ions , with screening parameter @xmath19 in these experiments , for wigner - seitz radius @xmath20 .
the plasma density distribution is gaussian in shape , @xmath21 , with initial size @xmath22 - 2 mm .
however , due to electron pressure forces , the plasma expands with time dependence given by @xmath23 , where @xmath24 is the expansion time @xcite .
[ figure caption ( partially recovered ) : ( a ) ions are optically pumped between spin states , from the -1/2 to + 1/2 spin state around @xmath25 , with two counterpropagating , circularly polarized laser beams detuned by @xmath26 rad / s from the @xmath27 transition of the strontium ion ( 421.7 nm ) . the velocity profiles of the individual spin populations are measured with lif using a tunable circularly - polarized probe beam of variable detuning @xmath28 . ( b ) idealized illustration of a pumped velocity distribution for + 1/2 ions resulting from optical pumping ( red ) , along with an unperturbed gaussian thermal distribution ( blue ) . ]
an optical pump - probe technique @xcite is used to measure @xmath29 , the average velocity of a spin-tagged " subpopulation of ions ( labeled @xmath30 ) relative to the local bulk velocity of all the ions ( @xmath31 ) .
the appendix provides a proof that the normalized vaf @xmath32 is equivalent to the observable @xmath33 as long as the total system is near thermal equilibrium and if terms beyond 2^nd^ order in a hermite - gauss expansion of the initial x - velocity distribution function for the subgroup , @xmath34 , are negligible .
as shown below , our experiment satisfies these conditions , which provides a new technique for measuring the vaf .
the evolution of @xmath29 is measured by first using optical pumping to create electron - spin - tagged ion sub - populations with non - zero average velocity ( fig .
@xmath35 ) .
pumping is accomplished by two counter - propagating , circularly - polarized laser beams each detuned by the same small amount @xmath26 from the @xmath27 transition at 421.7 nm .
taking advantage of the unpaired electron in the @xmath36 ground state , ions are pumped out of the + 1/2 spin state and into the -1/2 state around the negative x - velocity @xmath37 , while ions are pumped from -1/2 to + 1/2 spin around the positive velocity @xmath38 ( the quantization axis is taken to be along the axis of the pump beams , defined as @xmath39 ) .
this creates subpopulations of + 1/2 and -1/2 spin ions having velocity distributions skewed in opposite directions , while the entire plasma itself remains in equilibrium with @xmath40=0 .
the pump detuning , @xmath41 mhz , is resonant for ions with @xmath42 m / s , which is on the order of the thermal velocity , @xmath43 m / s for @xmath44k .
we probe the ion distribution with spatially - resolved laser - induced fluorescence ( lif ) spectroscopy @xcite ( fig .
@xmath35 ) .
a lif probe beam tuned near the @xmath45 transition of the @xmath11sr ion propagates along the x - axis and excites fluorescence that is imaged onto an intensified ccd camera with 1x magnification ( 12.5 @xmath46 per pixel ) , from which the plasma density and x - velocity distribution are extracted . by using a circularly - polarized lif probe beam , propagating nearly along the pump beam axis ,
we selectively probe only the + 1/2 ions .
pumping is applied several plasma periods ( @xmath47 ) after ionization to allow the plasma to approach equilibrium after the disorder - induced heating phase @xcite .
( @xmath48s@xmath49 is the ion plasma oscillation frequency . )
the optical pumping time is 200 ns , and the pump intensity is 200 mw / cm^2^ ( saturation parameter @xmath50 ) .
lif data is taken at least 35 ns after the turn off of the pump to avoid contamination of the signal with light from decay of atoms promoted to the @xmath51 state during the pumping process .
electro - optic modulator ( eom ) pulse - pickers are used to achieve 10 ns time resolution for application of the pump and probe beams .
the pumping and imaging transition is not closed , and about 1/15@xmath52 of the excitations result in an ion decaying to a metastable @xmath53 state that no longer interacts with the lasers . to ensure that larmor precession of the prepared atomic states does not contaminate the data , a 4.5 gauss magnetic field is applied along the pump - probe beam axis .
the lif spectra are fit to a convolution of a lorentzian function @xmath54 of frequency @xmath55 ( hz ) with the one - dimensional ion velocity distribution along the laser axis , @xmath56 , @xmath57 where @xmath9 is the laser wavelength , and @xmath58 is the width of lorentzian spectral broadening . for this system , @xmath59 , where @xmath60 is the laser linewidth ( 5.5 mhz ) and @xmath61 is the natural linewidth ( 20.2 mhz ) .
the width is power - broadened by the laser saturation parameter @xmath62 .
the distribution @xmath34 is modeled with a hermite - gauss expansion , @xmath63 where @xmath64 , and @xmath65 are hermite polynomials . for the analysis , @xmath66 was chosen because the amplitudes of orders 4 and higher were consistent with random noise .
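a sketch of this spectral model follows : a hermite - gauss velocity distribution convolved with the lorentzian instrument profile . the normalization convention , grid choices , and parameter names are assumptions , since the paper's exact expressions are behind @xmath placeholders .

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_gauss_dist(v, vbar, v_th, coeffs):
    """f(v) ~ exp(-u^2) * sum_n c_n H_n(u), u = (v - vbar)/v_th, with
    physicists' Hermite polynomials. one standard convention; the paper's
    exact normalization is not recoverable here."""
    u = (v - vbar) / v_th
    return np.exp(-u * u) / (np.sqrt(np.pi) * v_th) * hermval(u, coeffs)

def lif_spectrum(nu, vbar, v_th, coeffs, gamma_eff, lam):
    """model LIF spectrum: Lorentzian of half-width gamma_eff (Hz) convolved
    with f(v), mapped to frequency via the Doppler relation nu = v / lam.
    nu is a 1-d array of probe detunings (Hz)."""
    v = np.linspace(vbar - 6 * v_th, vbar + 6 * v_th, 400)
    f = hermite_gauss_dist(v, vbar, v_th, coeffs)
    lor = (gamma_eff / np.pi) / ((nu[:, None] - v[None, :] / lam) ** 2 + gamma_eff ** 2)
    return np.trapz(lor * f[None, :], v, axis=1)
```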
the bulk velocity of the plasma , @xmath67 , arises because of the plasma expansion .
to separate its effect from thermal velocity spread and perturbation due to optical pumping , the plasma is divided into regions that are analyzed independently @xcite .
each region is thin in the direction of the lif beam propagation ( 7 overlapping regions @xmath68 wide , spanning @xmath69 ) . @xmath70 and
@xmath67 for each region are determined from analysis of lif data from an unpumped plasma , and only the expansion coefficients @xmath71 are fit parameters .
the coupling parameter @xmath72 varies by no more than @xmath73 across the entire region analyzed .
sample lif spectra at various times after optical pumping are shown in figs .
@xmath74(a , b , c ) for a plasma with @xmath75 and @xmath76 . figs .
@xmath74(d , e , f ) show the corresponding ion velocity distributions and individual hermite - gauss components of the pumped velocity distributions extracted from fits to the raw spectra . at early times , there is significant amplitude in the @xmath77 term , corresponding to the skew in the velocity distribution .
this decays away as @xmath34 approaches a maxwellian centered around @xmath67 .
higher order terms are small at all times , satisfying an important condition for the proof that @xmath78 . in the frame co - moving with @xmath67 in a given region , the average velocity of + 1/2 ions a time @xmath79 after cessation of optical pumping is @xmath80 . for each plasma , a single time evolution of @xmath29
is calculated by averaging together individual values of @xmath29 from each region . figs .
@xmath81(a , b ) show sample data for @xmath75 and @xmath82 . in fig .
@xmath81(a ) , data are plotted versus time , while fig .
@xmath81(b ) plots @xmath33 versus time scaled by @xmath83 , showing that this is a universal timescale for the dynamics @xcite . corresponding data for @xmath84 , @xmath85 are shown in figs .
@xmath81(c , d ) . due to the plasma expansion
, the density , and thus the plasma frequency @xmath86 , decreases with time . to account for this ,
we show the time evolution of the averaged velocity as a function of the scaled time @xmath87 , where the density evolution is described by the self - similar expansion of a gaussian distribution @xcite . similarly , @xmath88/t_{f } and @xmath89/t_{f } , where @xmath90 is the total time of measurement .
the density typically varies by a factor of two during the measurement .
the normalization factor @xmath91 is determined from the fit of the data using a memory - function formalism described below .
data for @xmath33 shows non - exponential decay of the average velocity up to times given by @xmath92 , which is a hallmark of non - markovian dynamics reflecting the strong coupling of the ions .
this is most clearly shown in fig .
@xmath93 , which is an expanded view of the early - time data from velocity relaxation curves .
the early - time behavior of @xmath78 can be described using a memory - function formalism that treats the effects of collisional correlations at the microscopic level @xcite .
it can be derived from a generalized langevin equation describing the motion of a single test particle experiencing memory effects and fluctuating forces , which is familiar from treatments of brownian motion@xcite .
the evolution of the vaf is found to be @xmath94 here , @xmath95 is the memory function describing the influence at time @xmath79 from the state of the system at @xmath96 .
a general , closed - form expression for @xmath95 is lacking , but there are expressions derived from simplifying assumptions that agree well with molecular dynamics simulations for simple fluids @xcite , and yukawa potentials when @xmath97 @xcite .
some formulas introduce a time constant @xmath98 that may be interpreted as the correlation time for fluctuating forces @xcite . if one assumes that collisions are isolated instantaneous events , @xmath99 and the memory function becomes a delta function .
this is the markovian limit in which the evolution of a system is entirely determined by its present state and @xmath100 has purely exponential dependence .
data from a more weakly coupled sample [ fig .
@xmath93(d ) , @xmath101 shows no discernible roll - over in @xmath100 at short times . an often - used approximation for @xmath95 for the non - markovian regime , valid for short times and moderately strong coupling ,
is the gaussian memory function @xcite , @xmath102 , which satisfies the condition that memory effects vanish at long time and agrees with a taylor expansion of @xmath95 to second order around @xmath103 , relating @xmath98 to frequency moments of the fourier transform of the vaf @xcite . for the yukawa ocp , md simulations have shown that eq .
@xmath104 accurately reproduces the ion vaf for @xmath105 and @xmath106 , and the parameter @xmath107 can be related to a well - defined collision rate @xcite .
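the qualitative content of this formalism can be reproduced numerically . the sketch below integrates the generalized - langevin / volterra equation dZ/dt = - int_0^t K(s) Z(t - s) ds with a gaussian kernel K(t) = k0 exp( -(t / tau)^2 ) ; the kernel normalization and the exact form of the evolution equation are standard - literature assumptions , since eqs . @xmath109 and @xmath104 are elided here .

```python
import numpy as np

def vaf_from_memory(k0, tau, t_max, dt):
    """integrate dZ/dt = -int_0^t K(s) Z(t-s) ds with Gaussian kernel
    K(t) = k0 * exp(-(t/tau)^2) and Z(0) = 1, using forward Euler plus a
    trapezoidal convolution. as tau -> 0 (delta-like kernel) the solution
    approaches the Markovian exponential decay."""
    n = int(t_max / dt) + 1
    t = np.arange(n) * dt
    kern = k0 * np.exp(-(t / tau) ** 2)
    z = np.empty(n)
    z[0] = 1.0
    for i in range(n - 1):
        conv = np.trapz(kern[: i + 1] * z[i::-1], dx=dt)  # int_0^{t_i} K(s) Z(t_i - s) ds
        z[i + 1] = z[i] - dt * conv
    return t, z
```

note that a finite kernel forces dZ/dt ( 0 ) = 0 , so Z is flat near t = 0 ; this is exactly the non - exponential roll - over discussed below .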
figures @xmath81 and @xmath93 show fits of the data in the scaled time range @xmath108 to eq .
@xmath109 with a gaussian memory function , along with exponential fits of data with @xmath110 . at early times ( @xmath111 ) , the memory kernel fit captures the roll - over , which is indicative of non - markovian collisional dynamics .
the values of @xmath98 extracted from the fit are on the order predicted by md simulations of a classic ocp @xcite , although improved experimental accuracy is required before a precise comparison can be made .
using @xmath33 as an approximation for the ion vaf , the self - diffusion coefficient @xmath112 may be calculated from our measurements . as is normally the case with calculations of this type , proper treatment of the upper limit of integration in the green - kubo formula ( eq .
@xmath113 ) is critical for obtaining accurate results .
the behavior of the long - time tail of the vaf for a yukawa system has not been explored in detail for the regime of our experiment , and this is an important area for future study . for concreteness , we will assume a @xmath114 dependence which is well - established as @xmath115 for neutral simple fluids @xcite and is generally accepted as the slowest possible decay @xcite .
we fit the last few data points ( beyond the time @xmath116 where @xmath117 @xmath118 ) to a @xmath119 curve , where @xmath120 is the fit parameter and @xmath121 is the scaled time .
the dimensionless , self - diffusion coefficient , @xmath122 , is thus calculated as @xmath123 where the first term is calculated numerically from the data by linear interpolation and the trapezoidal rule .
the time of the last data point is @xmath124 , and @xmath125 in scaled units . extracted values for @xmath126 , along with theoretical curves for @xmath127 and @xmath128 determined from a fit to molecular dynamics simulations @xcite , are shown in fig . @xmath129 .
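the numerical recipe just described is sketched below . the t^(-3/2) tail exponent is the standard hydrodynamic long - time - tail value for simple fluids , used here because the paper's exponent sits behind an @xmath placeholder , and the number of tail points is an illustrative choice .

```python
import numpy as np

def d_star(t, z, n_tail=4, p=1.5):
    """D* = trapezoidal integral of the measured Z(t) up to the last point,
    plus an analytic tail: fit b to the last n_tail points assuming
    Z ~ b * t**(-p), then add int_{t_last}^inf b t^(-p) dt
    = b * t_last**(1-p) / (p-1). p = 1.5 is the hydrodynamic-tail assumption."""
    core = np.trapz(z, t)
    b = np.mean(z[-n_tail:] * t[-n_tail:] ** p)   # crude fit of the tail amplitude
    tail = b * t[-1] ** (1.0 - p) / (p - 1.0)
    return core, core + tail                      # lower bound, tail-corrected value
```

returning both values mirrors the error - bar convention below : the first assumes no contribution beyond the measured points , the second includes the analytic tail .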
the contribution from the unmeasured long - time tail of @xmath100 adds significant uncertainty , which is much greater than our random measurement error . here , we take the contribution from the analytic approximation of the tail in eq . [
eq : normdiff ] as our quoted error bars .
the lower error bar thus assumes no contribution beyond our measured points .
this is conservative given that the vaf is exponential in the weakly - coupled limit .
there are additional significant experimental improvements that can be made in the measurement and important systematic effects that must be investigated .
the latter are scientifically interesting in their own right , such as the timescale for the approach to equilibrium of velocity correlations after plasma creation and the effect of plasma expansion on the microscopic dynamics .
any complications caused by these systematics can be greatly reduced by performing measurements on a larger plasma for which the expansion time scale is much greater than the characteristic collisional timescale ( @xmath130 ) .
other sources of uncertainty include variation of density across the analysis regions , the spread in bulk plasma velocity across and within regions , the time - evolving density , and the uncertainty in the density calibration .
these uncertainties are reflected in the horizontal error bars in fig . @xmath129 and can be significantly reduced in future experiments .
utilizing a spin - tagging technique to measure the average velocity @xmath131 of a subpopulation of ions in an ultracold neutral plasma , and taking advantage of an identification of the time evolution of this quantity with the normalized ion vaf @xmath100 , we have experimentally measured the vaf in a strongly coupled plasma . from this
we have calculated the ion self - diffusion coefficient @xmath112 , which provides an experimental benchmark that has been lacking for molecular dynamics simulations of strongly coupled systems in three dimensions .
the data also display a non - exponential decay of @xmath100 at early times , which has not previously been observed experimentally in a bulk plasma and is indicative of non - markovian collisional dynamics .
this behavior is well described by a memory - function formalism .
overall , these measurements experimentally validate foundational concepts describing how the buildup and decay of ion velocity correlations at the microscopic level determine the dynamics of strongly coupled systems at the macroscopic level , which can not be adequately described by simple analytical methods . because ultracold neutral plasmas offer a clean realization of the commonly used yukawa ocp model , these results are relevant for fundamental kinetic theory and other plasmas for which effects of strong coupling are important .
we show that the quantity @xmath33 measured in these experiments corresponds to the normalized vaf @xmath132 if the initial velocity distribution for @xmath133 ions ( @xmath134 ) is maxwellian in @xmath135 and @xmath136 and well - described by a 2^nd^ order hermite - gauss expansion in @xmath137 , and the optical pumping prepares the subsystem of @xmath133 ions in a non - equilibrium state close to thermodynamic equilibrium .
to prove this , we transform into the frame co - moving with any bulk hydrodynamic velocity of the ensemble , making @xmath138 , which does not invalidate any steps in the proof .
for simplicity we assume that the plasma is spatially homogeneous in a volume @xmath139 , although the following arguments can be readily extended to account for non - uniform spatial distributions .
finally , we assume that the optical pumping occurs instantaneously at some initial time @xmath140 .
let us consider a specified + 1/2 ion labeled `` s '' with position @xmath141 and velocity @xmath142 at time @xmath79 ; for a statistical description of the dynamics , it is useful to define the microscopic phase space density @xmath143 and its statistical average @xmath144 . before optical pumping , for @xmath145 ,
the system is in thermal equilibrium and @xmath146 , where @xmath147 is the maxwell - boltzmann velocity distribution . after pumping , the subsystem is out of thermal equilibrium , @xmath148 we assume @xmath149 , so that @xmath150 satisfies @xmath151 where @xmath152 is the propagator , or retarded green 's function , of the equation that governs the temporal evolution of @xmath150 , which is obtained by linearizing the exact evolution equation satisfied by @xmath153 .
remarkably , the propagator @xmath154 , which describes the non - equilibrium dynamics of the system , is related simply to the equilibrium time - correlation function @xmath155 as follows @xmath156 where , in the last equality , we used the initial value @xmath157 .
this relation is an expression of the fluctuation - dissipation theorem @xcite . from eq.([deltaf ] ) , the average particle velocity along the x - direction determined in our experiment is @xmath158 where @xmath159 and @xmath160 .
if , as found experimentally ( see fig .
[ fig : sample_spectra ] ) , @xmath161 is initially well - described by a 2^nd^ order hermite - gauss expansion in @xmath137 , then @xmath162 where we used the fluctuation - dissipation theorem ( [ fluctuationdissipation ] ) to obtain the second equality .
the zeroth and second order terms vanish since @xmath163 , and the previous result simplifies to @xmath164 . therefore we obtain the desired result , @xmath165 . this work was supported by the air force office of scientific research ( fa9550 - 12 - 1 - 0267 ) , department of energy , fusion energy sciences ( de - sc0014455 ) , and department of defense through the national defense science and engineering graduate fellowship .
the work of j. daligault was supported by the department of energy office of fusion energy sciences .
| we present a study of the collisional relaxation of ion velocities in a strongly coupled , ultracold neutral plasma on short timescales compared to the inverse collision rate .
non - exponential decay towards equilibrium for the average velocity of a tagged population of ions heralds non - markovian dynamics and a breakdown of assumptions underlying standard kinetic theory .
we prove the equivalence of the average - velocity curve to the velocity autocorrelation function , a fundamental statistical quantity that provides access to equilibrium transport coefficients and aspects of individual particle trajectories in a regime where experimental measurements have been lacking . from our data
, we calculate the ion self - diffusion constant .
this demonstrates the utility of ultracold neutral plasmas for isolating the effects of strong coupling on collisional processes , which is of interest for dense laboratory and astrophysical plasmas . |
||||| FILE - In this Nov. 10, 2016 file photo, a man reads a newspaper with the headline "U.S. President-elect Donald Trump delivers a mighty shock to America" at a newsstand in Beijing. (Associated Press)
BEIJING (AP) — Chinese leaders face a challenge: How to deal with Donald Trump.
Weeks before taking office, the incoming American president is riling Beijing with confrontation and online statements that appear to foreshadow a tougher foreign policy toward China.
China awoke Monday to sharp criticism posted by Trump on Twitter, days after Beijing responded to his telephone conversation with Taiwan's president by accusing the Taiwanese of playing a "small trick" on Trump.
Trump wrote, "Did China ask us if it was OK to devalue their currency (making it hard for our companies to compete), heavily tax our products going into their country (the U.S. doesn't tax them) or to build a massive military complex in the middle of the South China Sea? I don't think so!"
That was apparently prompted by China's response to Trump's talk Friday with Tsai Ing-wen, the first time an American president or president-elect is known to have spoken to a Taiwanese leader since the U.S. broke off formal diplomatic relations in 1979.
So far, China has avoided responding with open hostility. On Monday, Chinese Foreign Ministry spokesman Lu Kang said China would have "no comment on what motivated the Trump team" to make the tweets, but said he believed both sides would continue to support a "sound and a stable bilateral relationship."
"For us, for China, we do not comment on his personality," Lu said. "We focus on his policies, especially his policies toward China."
China's reaction to Trump's call with Tsai was relatively low-key given the sensitivity China places on Taiwan.
The U.S. and Taiwan retain strong unofficial ties, and the U.S. sells weapons to the self-governing island. But American leaders have for decades avoided any official recognition in deference to China, which claims Taiwan as part of its territory, to be captured by force if necessary. Trump's reference in another tweet to Tsai as "the President of Taiwan" was sure to inflame China, which considers any reference to Taiwan having a president as a grave insult.
But China only said it would make a "solemn representation" in Washington, and Lu declined to expand on that statement Monday. Instead, China seemed to offer Trump a face-saving way out of an apparent blunder by blaming the Taiwanese. English-language commentaries then appeared in two state-run newspapers known to be used by China's ruling Communist Party leadership to send messages abroad.
"Trump might be looking for some opportunities by making waves," the Global Times said in a Monday editorial headlined, "Talk to Trump, punish Tsai administration."
"However, he has zero diplomatic experience and is unaware of the repercussions of shaking up Sino-U.S. relations," the newspaper said. "It is certain that Trump doesn't want a showdown with China, because it is not his ambition, and neither was it included in his promise to the electorate. He puts out feelers to sound China out and chalk up some petty benefits."
China's response was characteristically coded. But it now faces an incoming president who deals in outspoken tweets, not communiques.
Trump used a platform banned by censors in mainland China to renew several of his criticisms during the U.S. presidential campaign. Some of his arguments aren't true.
Taiwan's official Central News Agency, citing anonymous sources on Saturday, said Edwin Feulner, founder of the Washington-based Heritage Foundation, was a "crucial figure" in setting up communication channels between the sides.
Vice President-elect Mike Pence said Sunday that the phone call shouldn't necessarily be interpreted as a shift in U.S. policy. He shrugged off the attention to the incident as media hype.
"It was a courtesy call," Pence told NBC's "Meet the Press."
Ned Price, a spokesman for the White House National Security Council, said Trump's conversation does not signal any change to long-standing U.S. policy — although some in Taiwan expressed hopes for strong U.S. support from the incoming administration.
In terms of Trump's criticisms, Chinese imports are taxed at standard U.S. rates, while Washington has recently slapped painful punitive tariffs on Chinese steel, solar panels and other goods.
And while China once kept a tight grip on the value of the yuan, also known as the renminbi, it now allows it to trade within a bandwidth 2 percent above or below a daily target set by the People's Bank of China.
The yuan is currently trading at around a six-year low against the dollar. But economists now conclude that the currency is more or less properly valued in relation to the dollar and other foreign currencies. And with economic growth slowing considerably and more Chinese trying to move money out of the country, the government is now spending massively to hold up the yuan's value rather than depressing it as Trump and other critics accuse it of doing. It has also imposed strict controls on Chinese citizens moving money out of the country.
China has built up its military and constructed man-made islands in the South China Sea, and made sweeping territorial claims over almost the entire critical waterway. Those claims were broadly rejected in June by an international tribunal in The Hague.
Shi Yinhong, a professor of international relations at People's University in Beijing, predicted China would not lash out immediately, but calibrate its response over the next several months after Trump enters the White House.
"Trump's remarks will certainly raise the concerns of Chinese leaders," Shi said. "But at the moment, they will be restrained and watch his moves closely."
___
Associated Press writer Christopher Bodeen and news researcher Liu Zheng contributed to this report. ||||| President-elect sent tweets hours after Mike Pence tried to downplay possibility that Trump could threaten diplomatic rift with Beijing through actions last week
President-elect Donald Trump railed against China on Sunday, only hours after his transition team denied that his call with Taiwan’s president signaled a new US policy toward Pacific power.
“Did China ask us if it was OK to devalue their currency (making it hard for our companies to compete), heavily tax our products going into their country (the US doesn’t tax them) or to build a massive military complex in the middle of the South China Sea?” he tweeted. “I don’t think so!”
Earlier on Sunday the vice-president-elect, Mike Pence, had tried to downplay the possibility that Trump could threaten a diplomatic rift with Beijing through his actions last week. Trump’s 10-minute phone conversation on Friday with Tsai Ing-wen – thought to be the first time a US president or president-elect has spoken to a Taiwanese leader since 1979 – and subsequent reference to Tsai as “president” threatened such a breach, and implied he might be making up policy on the hoof.
In damage control mode, Pence sought to dismiss the row as “a tempest in a teapot”, contrasting it with Barack Obama’s rapprochement with communist Cuba.
“He received a courtesy call from the democratically elected president of Taiwan,” Pence told ABC’s This Week. “They reached out to offer congratulations as leaders around the world have and he took the call, accepted her congratulations and good wishes and it was precisely that.”
Later, in an interview on NBC’s Meet the Press, Pence again used the term “the president of Taiwan”, suggesting it was no slip of the tongue.
China views self-ruling Taiwan as part of its own territory awaiting reunification, and any US move implying support for independence – including use of the word “president” – is likely to offend Beijing.
Chinese state media said Trump’s “inexperience” led him to accept the phone call but warned that any breach of the “one China” stance would “destroy” relations with America.
Asked by ABC host George Stephanopoulos if he understood China’s objections, Pence replied: “Yes, of course.” But he quickly shifted gear to claim that the American people find Trump’s “energy” refreshing.
The Indiana governor was asked directly if there were implications for the “one China” policy. “We’ll deal with policy after 20 January,” he said, referring to the day of Trump’s inauguration.
On NBC, Pence suggested the controversy had been overplayed. “The waters here seem like a little bit of a tempest in a teapot,” he said.
“I mean, it’s striking to me that President Obama would reach out to a murdering dictator in Cuba and be hailed as a hero. And President-elect Donald Trump takes a courtesy call from the democratically elected president of Taiwan and it becomes something of a thing in the media.”
Other Trump surrogates sought to neutralise the issue. Speaking on Fox News Sunday, senior aide Kellyanne Conway said her boss was “well aware” of Washington’s “one China” policy.
“I know China has a perspective on it,” she said. “The White House and state department probably have a perspective on it. Certainly Taiwan has a perspective on it.
“The president-elect’s perspective is he accepted a congratulatory call. When he’s sworn in as commander-in-chief, he’ll make clear the fullness of his plans. But people shouldn’t read too much into it.”
Since his stunning victory over Hillary Clinton on 8 November, Trump has accepted congratulatory calls from dozens of world leaders including the prime ministers or presidents of Israel, Singapore, Japan and China, Conway said.
Speaking at the Brookings Institution in Washington on Sunday afternoon, Secretary of State John Kerry said it would be “valuable” to Trump if he took advice from state officials before such calls. Speaking to reporters at Trump Tower, however, Conway said the president-elect was “not really a talking points kind of guy”.
There were also signs of uncertainty over Trump’s choice of secretary of state. The transition team has previously said the short list was down to four – understood to be Tennessee senator Bob Corker, former New York mayor Rudy Giuliani, former CIA director Gen David Petraeus and Mitt Romney, the former Massachusetts governor who was the 2012 Republican nominee for president.
On Sunday, Conway told reporters “he’s broadened the search” and the “list is expanding”, and added: “More than four but who knows how many finalists there will be. It’s a big decision and nobody should rush through it.”
Petraeus appeared on ABC in what some observers billed as an audition for the cross-examination he could expect from Congress over his conviction for mishandling classified material.
Petraeus pleaded guilty last year to a misdemeanour charge after sharing intelligence with his biographer, Paula Broadwell, a former army officer with whom he had an extramarital affair.
“What I would say to them is what I’ve acknowledged for a number of years,” he said. “Five years ago, I made a serious mistake. I acknowledged it. I apologised for it. I paid a very heavy price for it and I’ve learned from it.”
Petraeus added that he has given more than 38 years of “in some cases unique service to our country in uniform and then at the CIA”.
Reflecting on his hour-long meeting with Trump last week, Petraeus, speaking from Germany, said he found the president-elect to be “quite pragmatic” and added: “What I enjoyed most, frankly, was the discussion of issues, if you will, or campaign rhetoric, and placing that in a strategic context.”
Petraeus’s viability for the job was put to Pence, who insisted that despite Trump’s attacks on Hillary Clinton over her own carelessness with classified emails while secretary of state, the former military leader was an “American hero” and still in contention.
“He paid a price for mishandling classified information,” he said on NBC. “I think the president-elect will weigh that against the background of an extraordinary career of military service.
“It will be the president-elect’s decision about the totality of Gen Petraeus’s experience and background.”
Stephanopoulos also grilled Pence over Trump’s evidence-free claim that “millions of people voted illegally”, denying him victory in the popular vote in the presidential election, in which Clinton leads by more than 2.5m ballots.
The host made 10 different attempts, through questions or interventions, to make Pence admit the claim was groundless.
Stephanopoulos asked: “It’s his right to make false statements?”
Pence replied: “Well, it’s his right to express his opinion as president-elect of the United States. I think one of the things that’s refreshing about our president-elect and one of the reasons why I think he made such an incredible connection with people all across this country is because he tells you what’s on his mind.”
Stephanopoulos shot back: “But why is it refreshing to make false statements?”
Maintaining his cool, Pence said: “Look, I don’t know that that is a false statement, George, and neither do you. The simple fact is that ...”
Stephanopoulos interrupted: “I know there’s no evidence for it.”
At the end of the exchange, Pence insisted: “He’s going to say what he believes to be true and I know that he’s always going to speak in that way as president.”
In another series of tweets early on Sunday, Trump threatened heavy taxes as retribution for US companies that move their business operations overseas and still try to sell their product to Americans.
He promised a 35% tax on products sold inside the US by any business that fired American workers and built a new factory or plant in another country. ||||| President-elect Donald Trump has revived some of his China-bashing from the campaign trail -- but the misleading claims still puzzle economists.
Trump accused China late Sunday of gaining an unfair advantage over American companies by devaluing its currency and slapping heavy taxes on U.S. products. The attack via Twitter came after a controversial phone call with Taiwan's president on Friday that had already ruffled feathers in Beijing.
Blasting China over its currency, the yuan, was a recurring theme of Trump's presidential campaign as he appealed to voters disillusioned with the effects of globalization. He labeled Beijing "a big abuser," arguing it has given Chinese exports a boost -- and cost America jobs -- by keeping the yuan artificially low.
Economists say that was probably true in the past, but China is now battling to stop its currency from falling too much.
Did China ask us if it was OK to devalue their currency (making it hard for our companies to compete), heavily tax our products going into.. — Donald J. Trump (@realDonaldTrump) December 4, 2016
their country (the U.S. doesn't tax them) or to build a massive military complex in the middle of the South China Sea? I don't think so! — Donald J. Trump (@realDonaldTrump) December 4, 2016
Beijing has been trying to give markets a greater role in determining the value of its currency. Between mid-2005 and early 2014, the yuan rose about 30% against the dollar. But as the Chinese economy has slowed in recent years, the yuan has fallen back.
"The irony of it is they're actually giving the market more say, but the market wants it to be weaker," said Julian Evans-Pritchard, a China expert at Capital Economics.
Chinese leaders want to avoid a repeat of the sharp drops that freaked out investors in August 2015 and January 2016. Beijing has burned through hundreds of billions of dollars since last year in efforts to prop up the yuan as huge sums of money have flowed out of the country.
Its foreign currency war chest, while still substantial, has dwindled to its lowest level in five years.
Trump attacking China for manipulating the yuan is unlikely to improve matters, economists say.
"I think in practical terms, it wouldn't do his voters any favors if he does actually try to get China to stop intervening in its currency," Evans-Pritchard said. "If anything, it's just going to make its exports cheaper because the currency will start falling even faster."
Trump's victory has also helped push the yuan to its lowest levels in about eight years. The U.S. dollar has surged against other currencies on expectations of a rate hike by the U.S. Federal Reserve this month and the potential for higher growth and inflation under Trump policies.
And since the U.S. election, the yuan has actually fallen less sharply against the dollar than many other emerging market currencies. And it has strengthened against the currencies of other major trading partners, like the euro and the Japanese yen.
During his campaign, Trump said he would label China a "currency manipulator," which would require the U.S. Treasury Secretary to hold talks with Beijing on the issue.
"I think it's just a good way for Trump to show he's being tough on China -- the symbolism is important -- without doing anything that would affect the trade relationship," Evans-Pritchard said. "It's an easy win."
Of much greater concern is Trump's threat to slap tariffs of as much as 45% on Chinese goods. If he follows through with that, the result could be a trade war that damages both economies.
The yuan wasn't Trump's only beef on Sunday. He also claimed that Beijing "heavily" taxes U.S. products but that the U.S. doesn't tax Chinese goods.
Economists say it's unclear what he meant. A spokeswoman for Trump didn't respond to a request for more information late Sunday.
In March, fact-checking website Politifact looked into similar comments Trump made to The New York Times about a "tremendous tax" imposed by China. It rated the claim "mostly false."
Trump might be referring to the sales tax that China and many other countries charge but the U.S. doesn't. That tax isn't specific to U.S. products, though. It applies to all goods, whether they're made in China or overseas.
Another possibility is that Trump is talking about import tariffs, which are generally higher in China than in the U.S. But the U.S. nonetheless imposes some degree of charges on goods from Chinese companies.
"It seems to me a lot of the issues he's focusing on now are either outdated or don't hold true," said Evans-Pritchard.
"Maybe he should focus on other grievances," he said, like China's restrictions on foreign investment or subsidies for state-run companies. ||||| With a confused world speculating about Donald Trump’s foreign policy intentions, past statements by the US president-elect are being scrutinised for clues as to which of his radical pledges might come true.
Despite having no track record in diplomacy, Trump has repeatedly made his world view clear, starting long before he entered the presidential race as a political outsider last year.
In the past 30 years, some things about Trump have never changed, according to Nicholas Burns, a former undersecretary of state for political affairs under George W. Bush: his denigration of Japan, Saudi Arabia and Nato allies and his praise for strongman leaders.
“You do have to pay attention to what he says. It is indication of how he’s going to go,” said Burns, now professor of the practice of international relations at Harvard Kennedy School. “I take him seriously.”
In the 1980s, the New York real estate developer began to appear on television shows and in print interviews complaining about Japan dumping cheap cars and electronic devices in the United States and defeating America economically. At the time, Japan was America’s major rival for economic pre-eminence and its companies were purchasing trophy assets in the US like the Rockefeller Centre in Manhattan.
On September 2, 1987, Trump published full-page advertisements in The New York Times, The Washington Post and The Boston Globe that were billed as an “open letter from Donald J. Trump”.
“Make Japan, Saudi Arabia, and others pay for the protection we extend as allies,” he said in the advertisements, which cost him a total of US$94,801.
Trump urged America to “‘tax’ these wealthy nations” and allow its economy to grow by relieving itself of “the cost of defending those who can easily afford to pay us for the defence of their freedom”.
“Let’s not let our great country be laughed at any more,” he concluded.
The 70-year-old adopted a similar tone in his presidential campaign, refining the message into the slogan “Make America Great Again” and adding China to the list of those accused of being “currency manipulators” and “job thieves”.
In the course of 16 months of campaigning, Burns added, Trump also consistently demonstrated his disapproval of America’s trade agreements, alliance system and military overcommitment.
That sounded the death knell for the 12-nation Trans-Pacific Partnership (TPP) free-trade pact, dubbed a potential disaster by Trump , with its demise confirmed last month when the president-elect said in a video posted on YouTube that he would withdraw the US from it on his first day in office.
“It’s a great mistake, but I think he’s going to carry through with this great mistake,” Burns said, referring to an important strategic rationale for the TPP, which was not to let China write the rules of global trade.
John Negroponte, a former director of national intelligence and deputy secretary of state under George W. Bush who is now professor of international affairs at George Washington University, said killing off the TPP would create a “short-term opportunity” for China to promote its own version of a regional free-trade agreement.
In the YouTube video, Trump said he would instead look to negotiate bilateral trade deals, with the intention of creating American jobs.
Trump’s 1987 call for America to tax other nations evolved into a campaign pledge to impose a 45 per cent tariff on Chinese imports.
‘Breathtaking’ resistance to Romney becoming secretary of state, says top Trump aide
Burns said it was inconceivable that the US would begin a trade war with China next year, but any US administration would have to take a tough-minded approach to trade with the world’s biggest exporter given allegations of dumping and Beijing’s lack of compliance with international rules such as those protecting intellectual property rights.
Michael Chertoff, a former secretary of homeland security under George W. Bush and Barack Obama, said some of Trump’s campaign statements suggested an inclination, but not necessarily the exact policies he would implement.
“As long as you are paying attention to the issues you raise, you don’t have to necessarily honour every word. You just have to kind of get the music, even if not the lyrics,” Chertoff said.
Burns compared the proposed imposition of punitive tariffs on China with previous Trump remarks encouraging Japan and South Korea to develop nuclear weapons, suggesting they would used to get President Xi Jinping’s attention in his first meeting with the new US president.
“Trump wrote The Art of the Deal. What did he say in The Art of the Deal? You start with your maximum positions,” Burns said.
In a surprise development, Trump and Taiwanese President Tsai Ing-wen had a telephone conversation on Friday, in which the two noted the close economic, political, and security ties between the island and the US. Taipei said the phone call was initiated by Tsai, and Beijing described the move as a “petty trick”. It is not known whether Trump intends to increase American engagement with Taiwan.
The billionaire is viewed by Chinese leaders as a pragmatic businessman and a willing negotiator, according to Yun Sun, an expert in China’s foreign policy at the Brookings Institution, with many of Trump’s claims possibly opening positions for future negotiations.
Many previous US presidential candidates had also branded China a currency manipulator, Yun said, with the accusation becoming “campaign rhetoric we have developed a fatigue about ... we are not going to take this super seriously”.
Burns said one of the most “dangerous things” Trump said during the campaign – a threat to withdraw US troops from South Korea and Japan unless they contributed more financially – also harked back to earlier statements. While it might also be a negotiating gambit, Burns said, by simply saying what he had, Trump had already damaged America’s relationship with two key allies and eroded the foundation of its international strength.
In New York on November 17, Japanese Prime Minister Shinzo Abe became the first foreign leader to meet the president-elect, hoping to “build trust” with America’s next commander in chief.
“I hope [Trump’s opinion on Japan] will change because Japan is a very critical, very valued ally of the United States,” Burns said.
He said that with North Korea’s nuclear weapon and missile capabilities developing rapidly, and posing an urgent security challenge by possibly bringing the US mainland within range in the next few years, Trump should reinforce ties with Japan and South Korea and put pressure on Beijing to exercise its influence on Pyongyang.
The deployment of the US Terminal High Altitude Area Defence (THAAD) anti-missile system in South Korea was a part of America’s leverage when dealing with China, he said.
“For Donald Trump it is a very effective opening gambit – ‘we are going to double the force there if we have to’,” Burns said.
Sun said the likelihood of any negotiation between Trump and Xi on the THAAD deployment would depend on how Trump defined America’s national interests on the Korean Peninsula. But Trump had never given any indication that he viewed North Korea as an issue of significance, simply saying “it’s China’s problem to fix”.
The president-elect showed a similar lack of knowledge about other hot spots in the relationship between US and China such as the South China Sea, said Stephen Nagy, an associate professor of politics and international relations at International Christian University in Tokyo.
But he will have a team and a mechanism to advise, check and balance him, and prevent his policies from becoming too subject to personal whim.
Burns said: “Of course he has to rely on the advice of his senior cabinet officials … plus there’s career government, career foreign services, career ambassadors, career military.”
Trump’s statements about putting America first and retreating from its role as the world’s policeman have led to debate about whether he is an isolationist. But Chertoff said it was more likely Trump would move back to the approach adopted by former secretary of state Henry Kissinger in the 1970s, which was focused on the balance of power and spheres of influence.
“Stepping away from what would be regarded as the Russian sphere or Chinese sphere, that’s not isolationism,” Chertoff said.
Jane Harman, president of the Washington-based Wilson Centre think tank, said none of the people named or tipped for top foreign policy posts in the Trump administration were isolationists.
Trump has announced retired general Michael Flynn, a former director of the Defence Intelligence Agency, as his national security adviser, and another retired general, James Mattis, as his secretary of defence. Flynn has taken a tough stance on Islamism and Mattis has described “political Islam” as the major security issue facing the US.
Trump has yet to announce his pick for secretary of state, which will be another key indicator of his administration’s orientation.
Chertoff said Trump would not go back to a classic, conservative, Republican set of policies, but “drive the bus himself in a sense that reflects his world view”.
“The question is, will he have advisers he would listen to?” Harman said. | – The Donald Trump phone conversation that led to an official complaint from Beijing was a calculated move, not an innocent congratulatory call or a blunder made by a man with no knowledge of foreign policy, insiders tell the Washington Post. Instead, the provocative conversation with Taiwanese President Tsai Ing-wen was a deliberate effort to break with the previous 37 years of US policy on Taiwan and had been planned for months, the sources say. They say the move was the product of talks between Trump aides, many of whom are hawkish on China, and their Taiwanese counterparts. In other developments: Adding to the controversy, Trump used his medium of choice to harshly criticize China on Sunday. "Did China ask us if it was OK to devalue their currency (making it hard for our companies to compete), heavily tax our products going into their country (the US doesn't tax them) or to build a massive military complex in the middle of the South China Sea?" he tweeted. "I don't think so!" Hours before Trump's tweetburst, Mike Pence tried to downplay the importance of the Taiwan call, reports the Guardian. "He received a courtesy call from the democratically elected president of Taiwan," the vice president-elect told ABC's This Week. "They reached out to offer congratulations as leaders around the world have and he took the call, accepted her congratulations and good wishes, and it was precisely that." On NBC, Pence described the furor as a "tempest in a teapot" and wondered why Trump is getting a hard time now when Obama was "hailed as a hero" for reaching out to a "murdering dictator in Cuba." Analysts tell CNNMoney that Trump's criticisms of China would have been valid years ago but are now outdated: The country is now fighting to keep its currency from falling further, not to keep the yuan low to boost exports. The South China Morning Post reports that the position Trump has adopted on China is consistent with policies he has been advocating for at least 30 years. In the 1980s, he repeatedly slammed Japan, then America's main economic rival, for allegedly dumping cheap goods on the US market. In 1987, he took out full-page newspaper ads calling for more backbone in US foreign policy. The AP reports that China has responded fairly mildly to Trump's latest comments, with Foreign Minister Lu Kang saying Monday that the government won't comment on Trump's personality or on his tweets, but that it continues to hope for a "sound and a stable bilateral relationship." An editorial in the government-run Global Times blamed Taiwan for the controversy. Trump "has zero diplomatic experience and is unaware of the repercussions of shaking up Sino-US relations," the paper said. "It is certain that Trump doesn't want a showdown with China, because it is not his ambition, and neither was it included in his promise to the electorate. He puts out feelers to sound China out and chalk up some petty benefits." |
we consider development of a multilevel iterative solver for large - scale sparse linear systems corresponding to graph laplacian problems for graphs with balanced vertex degrees .
a typical example is furnished by the matrices corresponding to the ( finite difference)/(finite volume)/(finite element ) discretizations of scalar elliptic equation with mildly varying coefficients on unstructured grids .
multigrid ( mg ) methods have been shown to be very efficient iterative solvers for graph laplacian problems and numerous parallel mg solvers have been developed for such systems .
our aim here is to design an algebraic multigrid ( amg ) method for solving the graph laplacian system and discuss the implementation of such methods on multi - processor parallel architectures , with an emphasis on implementation on graphical processing units ( gpus ) .
the programming environment which we use in this paper is the compute unified device architecture ( cuda ) toolkit , introduced in 2006 by nvidia , which provides a framework for programming on gpus . using this framework , in the last 5 years
several variants of geometric multigrid ( gmg ) methods have been implemented on gpus @xcite , and a high level of parallel performance for the gmg algorithms on cuda - enabled gpus has been demonstrated in these works . on the other hand , designing amg methods for massively parallel heterogeneous computing platforms , e.g. , clusters of gpus , is very challenging , mainly due to the sequential nature of the coarsening processes ( setup phase ) used in amg methods . in most amg algorithms , coarse - grid points or basis functions are selected sequentially using graph theoretical tools ( such as maximal independent sets and graph partitioning algorithms ) .
although extensive research has been devoted to improving the performance of parallel coarsening algorithms , leading to notable improvements on cpu architectures @xcite , on a single gpu @xcite , and on multiple gpus @xcite , the setup phase is still considered a bottleneck in parallel amg methods .
we mention the work in @xcite , where a smoothed aggregation setup is developed in cuda for gpus . in this paper , we describe a parallel amg method based on the un - smoothed aggregation amg ( ua - amg ) method .
the setup algorithm we develop and implement has several notable design features .
a key feature of our parallel aggregation algorithm is that it first chooses coarse vertices using a parallel maximal independent set algorithm @xcite and then forms aggregates by grouping coarse level vertices with their neighboring fine level vertices , which , in turn , avoids ambiguity in choosing fine level vertices to form aggregates .
such a design eliminates both the memory write conflicts and conforms to the cuda programming model .
the triple matrix product needed to compute the coarse - level matrix ( a main bottleneck in parallel amg setup algorithms ) simplifies significantly in the ua - amg setting , reducing to summations of entries in the matrix on the finer level .
the parallel reduction sums available in cuda are quite an efficient tool for this task during the amg setup phase .
additionally , the ua - amg setup typically leads to low grid and operator complexities . in the solve phase of the proposed algorithm ,
a k - cycle @xcite is used to accelerate the convergence rate of the multilevel ua - amg method . such a multilevel method optimizes the coarse grid correction and results in an approximate two - level method .
two parallel relaxation schemes are considered in our amg implementation : a damped jacobi smoother , and a parameter - free @xmath0-jacobi smoother introduced in @xcite together with its weighted version in @xcite . to further accelerate the convergence rate of the resulting k - cycle method , we apply it as a preconditioner to a nonlinear conjugate gradient method .
the remainder of the paper is organized as follows . in section [ sec : ua - amg ] , we review the ua - amg method
. then , in section [ sec : setup ] , a parallel graph aggregation method is introduced , which is our main contribution .
the parallelization of the solve phase is discussed in section [ sec : solve ] . in section [ sec : numerics ] , we present some numerical results to demonstrate the efficiency of the parallel ua - amg method .
the linear system of interest has as coefficient matrix the graph laplacian corresponding to an undirected connected graph @xmath1 . here
, @xmath2 denotes the set of vertices and @xmath3 denotes the set of edges of @xmath4 .
we set @xmath5 ( cardinality of @xmath2 ) . by @xmath6
we denote the inner product in @xmath7 and the superscript @xmath8 denotes the adjoint with respect to this inner product .
the _ graph laplacian _
@xmath9 is then defined via the following bilinear form @xmath10 we assume that the weights @xmath11 , and @xmath12 are strictly positive for all @xmath13 and @xmath14 .
the first summation is over the set of edges @xmath3 ( over @xmath15 connecting the vertices @xmath13 and @xmath14 ) , and @xmath16 and @xmath17 are the @xmath13-th and @xmath14-th coordinate of the vector @xmath18 , respectively .
we also assume that the subset of vertices @xmath19 is such that the resulting matrix @xmath20 is symmetric positive definite ( spd ) .
if the graph is connected , @xmath19 could contain only one vertex and @xmath20 will be spd . for matrices corresponding to the discretization of a scalar elliptic equation on unstructured grids ,
@xmath19 is the set of vertices near ( one edge away from ) the boundary of the computational domain .
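for concreteness , a minimal sketch ( in python / scipy , purely illustrative ; the paper's implementation is in cuda ) of assembling such a graph laplacian from a weighted edge list and eliminating a boundary set to obtain an spd matrix ; the toy graph and the choice of eliminated vertex are assumptions for the example :

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(n, edges, weights):
    """Assemble the weighted graph Laplacian L = D - W of an
    undirected graph with n vertices from its edge list."""
    i, j = np.array(edges).T
    w = np.asarray(weights, dtype=float)
    # symmetric weight matrix: each edge contributes w_ij = w_ji
    W = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n, n)).tocsr()
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return (D - W).tocsr()

# toy example: a path graph with unit weights
L = graph_laplacian(4, edges=[(0, 1), (1, 2), (2, 3)], weights=[1, 1, 1])
# removing the rows/columns of a nonempty boundary set makes the
# remaining principal submatrix symmetric positive definite
A = L[1:, 1:]
```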
the linear system of interest is then @xmath21 with this system of equations we associate a multilevel hierarchy consisting of spaces @xmath22 , each defined as the range of an interpolation / prolongation operator @xmath23 with @xmath24 .
given the @xmath25-th level matrix @xmath26 , the aggregation - based prolongation matrix @xmath27 is defined in terms of a non - overlapping partition of the @xmath28 unknowns at level @xmath25 into the @xmath29 nonempty disjoint sets @xmath30 , @xmath31 , called aggregates .
an algorithm for choosing such aggregates is presented in the next section .
the prolongation @xmath27 is the @xmath32 matrix with columns defined by partitioning the constant vector , @xmath33 , with respect to the aggregates : @xmath34 the resulting coarse - level matrix @xmath35 is then defined by the so called `` triple matrix product '' , namely , @xmath36 note that since we consider ua - amg , the interpolation operators are boolean matrices such that the entries in the coarse - grid matrix @xmath37 can be obtained from a simple summation process : @xmath38 thus , the triple matrix product , typically _ the _ costly procedure in an amg setup , simplifies significantly for ua - amg to reduction sums .
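a minimal sketch , assuming the aggregates are encoded as an integer array mapping each fine vertex to its aggregate ( the names below are ours , not the paper's ) ; the boolean prolongator and the reduction - sum form of the triple matrix product then read :

```python
import numpy as np
import scipy.sparse as sp

def prolongator(agg, n_coarse):
    """Piecewise-constant (boolean) prolongator: P[i, agg[i]] = 1."""
    n = len(agg)
    return sp.coo_matrix((np.ones(n), (np.arange(n), agg)),
                         shape=(n, n_coarse)).tocsr()

def coarse_matrix(A, P):
    """A_c = P^T A P; since P is boolean, entry (I, J) of A_c is just
    the sum of the fine-level entries a_ij over i in aggregate I and
    j in aggregate J, i.e. a reduction sum."""
    return (P.T @ A @ P).tocsr()
```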
we now introduce a general ua - amg method ( see algorithm [ alg : ua - amg ] ) ; in the subsequent sections we describe the implementation of each of its components for gpus .

* setup phase : * given @xmath39 ( size of the coarsest level ) and @xmath40 ( maximum number of levels ) , for @xmath41 : construct the aggregation @xmath42 , @xmath43 , based on @xmath44 , and compute @xmath37 by @xmath45 .

* solve phase : * on the coarsest level , solve @xmath46 exactly . on each finer level :

1 . pre - smoothing : @xmath47 .

2 . restriction : compute @xmath48 .

3 . coarse grid correction : solve @xmath49 approximately by recursively calling the amg on the coarser level @xmath50 , obtaining @xmath51 .

4 . prolongation : compute @xmath52 .

5 . post - smoothing : @xmath53 .
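a serial sketch of this solve phase ( python ; the actual implementation is a parallel cuda code , and a k - cycle rather than this plain v - cycle is used later ) ; the damping factor and the direct coarsest - level solver are illustrative choices :

```python
import numpy as np
import scipy.sparse.linalg as spla

def damped_jacobi(A, b, x, omega=0.8):
    # one smoothing sweep; omega is an assumed damping factor
    return x + omega * (b - A @ x) / A.diagonal()

def amg_cycle(levels, b, x, lvl=0, nu=1):
    """One multilevel cycle on a hierarchy `levels` of (A, P) pairs,
    with P = None on the coarsest level."""
    A, P = levels[lvl]
    if P is None:
        return spla.spsolve(A.tocsc(), b)      # exact coarsest-level solve
    for _ in range(nu):                        # pre-smoothing
        x = damped_jacobi(A, b, x)
    r = b - A @ x
    bc = P.T @ r                               # restriction
    e = amg_cycle(levels, bc, np.zeros(P.shape[1]), lvl + 1, nu)
    x = x + P @ e                              # prolongation + correction
    for _ in range(nu):                        # post-smoothing
        x = damped_jacobi(A, b, x)
    return x
```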
consider the system of linear equations corresponding to an unweighted graph @xmath54 partitioned into two subgraphs @xmath55 .
further assume that the two subgraphs are stored on separate computers . to implement a jacobi or gauss - seidel smoother for the graph laplacian equation with respect to @xmath4 ,
the communication between the two computers is proportional to the number of edge cuts of such a partitioning , given by @xmath56 therefore , a partition corresponding to the minimal edge cut in the graph results in the fastest implementation of such smoothers .
this in turn gives a heuristic argument , as also suggested in @xcite , @xcite , that when partitioning the graph into subgraphs ( aggregates ) , the subgraphs should have a similar number of vertices and a small `` perimeter . ''
such a partitioning can be constructed by choosing any vertex in the graph , naming it as a coarse vertex , and then aggregating it with its neighboring vertices .
this heuristic motivates our aggregation method .
the algorithm consists of a sequence of two subroutines : first , a parallel maximal independent set algorithm is applied to identify coarse vertices ; then a parallel graph aggregation algorithm follows , so that subgraphs ( aggregates ) centered at the coarse vertices are formed . in the algorithm , to reduce repeated global memory read access and write conflicts , we impose explicit manual scheduling on data caching and flow control in the implementations of both algorithms ; the aim is to achieve the following goals :

1 . ( read access coalescence ) : store the data that a node uses frequently locally or on a fast - connecting neighboring node .

2 . ( write conflict avoidance ) : reduce , or eliminate , the situation in which several nodes need to communicate with a central node simultaneously .
the idea behind such an algorithm is to simplify memory coalescence : design a random aggregation algorithm in which as many threads as possible load from the same memory location , while as few threads as possible write to the same memory location . therefore , it is natural to have one vertex per thread when choosing the coarse vertices . for vertices that are connected , the corresponding processing threads should be grouped together . by doing so , repeated loads from global memory can be avoided . however , we also need to ensure that no two coarse vertices compete for a fine level point , because both atomic operations and inter - thread communication are costly on a gpu .
therefore , the coarse vertices are chosen in such a way that any two of them are at distance 3 or more , which is the same as finding a maximal independent set of vertices for the graph corresponding to @xmath57 , where @xmath20 is the graph laplacian of a given graph @xmath4 ; each fine level vertex can then independently determine which coarse vertex it associates with . given an undirected unweighted graph @xmath54 , we first find a set @xmath58 of coarse vertices such that @xmath59 here , @xmath60 is the graph distance function defined recursively as @xmath61 assuming we obtain such a set @xmath58 , or even a subset of @xmath58 , we can then form aggregates by picking a vertex @xmath13 in @xmath58 and defining an aggregate as the set containing @xmath13 and its neighbors .
the distance condition guarantees that two distinct vertices in @xmath58 do not share any neighbors .
the operation of marking the numberings of subgraphs on the fine grid vertices is write - conflict free , and the restriction imposed above ensures that aggregates can be formed independently and simultaneously .
the rationale of the independent set algorithm is as follows : first , a random vector @xmath62 is generated , each component of which corresponds to a vertex in the graph .
then we define the set @xmath58 as follows : @xmath63 if @xmath58 is not empty , this construction results in a collection of vertices in @xmath58 that are pairwise of distance 3 or more .
indeed , assume that @xmath64 for @xmath65 , let @xmath66 . from the definition of the set @xmath58
, we immediately conclude that @xmath67 .
of course , more caution is needed when @xmath58 defined above is empty ( a situation that may occur depending on the vector @xmath62 ) . however ,
this can be remedied , by assuming that the vector @xmath62 ( with random entries ) has a global maximum , which is also a local maximum .
the set @xmath58 then contains at least this vertex .
the same algorithm can then be applied recursively to the remaining graph ( after this vertex is removed ) . in practice , @xmath58 contains not just one but many vertices .
we here give a description of the parallel aggregation algorithm , with each thread running an exact copy of the following code for its vertex @xmath13 :

1 . generate a quasi - random number and store it in @xmath68 , as @xmath69 ; mark vertex @xmath13 as `` unprocessed '' ; wait until all threads complete these operations .

2 .
( 2a ) goto ( 2d ) if @xmath13 is marked `` processed '' , otherwise continue to ( 2b ) .
( 2b ) determine whether the vertex @xmath13 is a coarse vertex by checking if the following is true : @xmath70 if so , continue to ( 2c ) ; if not , goto ( 2d ) .
( 2c ) form an aggregate centered at @xmath13 : let @xmath71 be the set of vertices defined as @xmath72 define a column vector @xmath73 such that @xmath74 mark the vertices @xmath75 `` processed '' and request an atomic operation to update the prolongator @xmath76 as @xmath77 .
( 2d ) synchronize all threads ( that is , wait until all threads reach this step ) ; stop if @xmath13 is marked `` processed '' , otherwise goto step ( 2a ) .

within each pass of the parallel aggregation algorithm ( paa , algorithm [ alg : paa ] ) ,
the following two steps are applied to each vertex @xmath13 : ( a ) construct a set @xmath58 which contains coarse vertices ; ( b ) construct an aggregate for each vertex in @xmath58 . a serial sketch of one such pass is given below .
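a serial emulation of one pass of steps ( a ) and ( b ) ( python sketch of the per - thread logic ; the score @xmath62 uses the degree - plus - uniform recipe given later in this section , and the strict local maximum over distance 2 realizes the distance - 3 separation of coarse vertices ) :

```python
import numpy as np

def aggregation_pass(adj, processed, rng):
    """One pass: pick coarse vertices as strict local maxima of a
    quasi-random score over all unprocessed vertices within graph
    distance 2, then aggregate each with its unprocessed neighbors.
    `adj` is a list of neighbor lists; `processed` is a bool list."""
    n = len(adj)
    x = np.array([len(adj[i]) for i in range(n)]) + rng.random(n)
    aggregates = []
    for i in range(n):                      # each thread owns one vertex
        if processed[i]:
            continue
        two_ring = {j for nb in adj[i] for j in adj[nb]} | set(adj[i])
        two_ring.discard(i)
        if all(processed[j] or x[j] < x[i] for j in two_ring):
            agg = {i} | {j for j in adj[i] if not processed[j]}
            for j in agg:
                processed[j] = True
            aggregates.append(agg)
    return aggregates

# repeated passes until the aggregates cover the whole vertex set
adj = [[1], [0, 2], [1, 3], [2]]            # toy path graph
processed = [False] * len(adj)
rng = np.random.default_rng(0)
aggs = []
while not all(processed):
    aggs += aggregation_pass(adj, processed, rng)
```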
note that these two subroutines can be executed in a parallel fashion . indeed ,
step ( a ) does not need to be applied to the whole graph before starting step ( b ) .
even if @xmath58 is partially completed , any operation in step ( b ) will not interfere step ( a ) , running on the neighboring vertices and completing the construction of @xmath58 .
a problem with this approach is that it usually cannot produce a set of aggregates that covers the vertex set @xmath78 after one pass of steps ( a ) and ( b ) .
we thus run several passes and the algorithm terminates when a complete cover is obtained .
the number of passes is reduced if we make the set @xmath58 as large as possible in each pass ; therefore , the quasi - random vector @xmath62 needs to have many local maxima .
another heuristic argument is that @xmath58 needs to be constructed in a way that every coarse vertex has a large number of neighboring vertices .
numerical experiments suggest that the following is a good way of generating the vector @xmath62 with the desired properties .
@xmath79 where @xmath80 is the degree of the vertex @xmath13 , and @xmath81 generates a random number uniformly distributed on the interval @xmath82 . to improve the quality of the aggregates
, we can either impose some constraints during the aggregation procedure ( which we call in - line optimization ) , or post - process an existing aggregation in order to improve it .
one in - line strategy that we use to improve the quality of the aggregation is to limit the number of vertices in an aggregate during the aggregation procedure .
however , such limitations may result in a small coarsening ratio . in such a case , numerical results suggest that applying the aggregation process twice , which is equivalent to skipping a level in the multilevel hierarchy , can compensate for this .
our focus is on a post - processing strategy , which we name `` rank one optimization '' .
it uses an _ a priori _ estimate to adjust the interface ( boundary ) of a pair of aggregates , so that the aggregation based two level method , with a fixed smoother , converges fast locally on those two aggregates .
we consider the connected graph formed by a union of aggregates ( say a pair of them , which will be the case of interest later ) , and let @xmath83 be the dimension of the underlying vector space .
let @xmath84 be a semidefinite weighted graph laplacian ( representing a local sub - problem ) and @xmath85 be a given local smoother . as is usual for semidefinite graph laplacians
, we consider the subspace @xmath86-orthogonal to the null space of @xmath87 and we denote it by @xmath78 . the @xmath86 orthogonal projection on @xmath78
is denoted here by @xmath88 .
let @xmath89 be the error propagation operator for the smoother @xmath90 .
we consider the two level method whose error propagation matrix is @xmath91 here @xmath92 is a subspace and @xmath93 is the @xmath94-orthogonal projection of the elements of @xmath78 onto the coarse space @xmath95 . in
what follows we use the notation @xmath96 when we want to emphasize the dependence on @xmath97 .
we note that @xmath98 is well defined on @xmath78 because @xmath87 is spd on @xmath78 , and hence @xmath99 is an inner product on @xmath78 .
we also have that @xmath98 is self - adjoint on @xmath78 , and under the assumption @xmath92 , we obtain @xmath100 and @xmath101 . also , @xmath102 is self - adjoint on @xmath78 in the @xmath99 inner product iff @xmath90 is self - adjoint in the @xmath86-inner product on @xmath103 .
we now introduce the operator @xmath104 ( recall that @xmath92 ) @xmath105 and from the definition of @xmath106 for all @xmath107 we have @xmath108 we note the following identities which follow directly from the definitions above and the assumption @xmath92 : @xmath109 with respect to the coarse space @xmath95 , we need to make @xmath110 maximal .
the following lemma quantifies this observation and is instrumental in showing how to optimize locally the convergence rate when the subspaces @xmath95 are one dimensional . in the statement of the lemma we use @xmath111 to denote a subset of minimizers of
a given , not necessarily linear , functional @xmath112 on a space @xmath113 .
more precisely , we set @xmath114 we have similar definition ( with obvious changes ) for the set @xmath115 .
[ the - only - lemma ] let @xmath102 be the projection of the local smoother on @xmath78 , and let @xmath116 be the set of all one - dimensional subspaces of @xmath78 .
then we have the following : @xmath117 where @xmath118 and @xmath119 . * proof . * from the identities it follows that we can restrict our considerations to @xmath120 and that we only need to prove the lemma with @xmath121 and @xmath122 .
in order to make the presentation more transparent , we denote @xmath123 , @xmath124 .
let us mention also that by orthogonality in this proof we mean orthogonality in the @xmath99 inner product on @xmath78 .
the proof then proceeds as follows .
let @xmath125 be such that @xmath126 .
we set @xmath127 .
note that for such a choice of @xmath128 we have @xmath129 and hence @xmath130 on the other hand , for all @xmath131 we have @xmath132 , and we then conclude that @xmath133 . combining the above , we conclude the following , which proves the first identity :
@xmath134 to prove the remaining statement , we observe that for any @xmath135 the inequalities above become equalities , and hence @xmath136 this implies that @xmath137 .
it is also clear that @xmath138 for all @xmath139 , because @xmath128 is one dimensional .
in addition , since @xmath140 is self - adjoint , it follows that @xmath128 is the span of the eigenvector of @xmath140 with eigenvalue of magnitude @xmath141 .
next , for any @xmath142 we have @xmath143 , where @xmath144 is the second largest singular value of @xmath140 and with equality holding iff @xmath145 .
this completes the proof .
we now move on to consider a pair of aggregates .
let @xmath87 be the graph laplacian of a connected , positively weighted graph @xmath147 which is a union of two aggregates @xmath148 and @xmath149 .
furthermore , let @xmath150 be the characteristic vector for @xmath151 , namely a vector with components equal to @xmath152 at the vertices of @xmath151 and equal to zero at the vertices of @xmath153 .
analogously we have a characteristic vector @xmath154 for @xmath153 .
finally , let @xmath155 be the space of vectors that are linear combinations of @xmath150 and @xmath154 .
more specifically , the subspace @xmath95 is defined as @xmath156 let @xmath116 be the set of subspaces defined above for all possible pairs of @xmath148 and @xmath149 , such that @xmath157 .
note that by the definition above , every pair @xmath158 gives us a space @xmath159 which is orthogonal to the null space of @xmath87 , i.e. orthogonal to @xmath160 .
we now apply the result of lemma [ the - only - lemma ] and show how to improve locally the quality of the partition ( the convergence rate @xmath161 ) by reducing the problem of minimizing the @xmath87-norm of @xmath162 to the problem of finding the maximum of the @xmath87-norm of the rank one transformation @xmath163 . under the assumption that the spaces @xmath95 are orthogonal to the null space of @xmath87 ( which they satisfy by construction ) from lemma [ the - only - lemma ]
we conclude that the spaces @xmath128 which minimize @xmath164 also maximize @xmath110 . for the pair of aggregates , @xmath165 is the largest eigenvalue of @xmath166 , where @xmath167 is the pseudo - inverse of @xmath20 . clearly , the matrix @xmath166 is also a rank one matrix , and hence @xmath168 during the optimization steps , we calculate the trace using the fact that for any rank one matrix @xmath169 we have @xmath170 where @xmath171 is a nonzero diagonal entry ( any nonzero diagonal entry ) and @xmath172 is the @xmath173-th column of @xmath169 .
the formula is straightforward to prove if we set @xmath174 for two column vectors @xmath175 and @xmath62 , and it also suggests a numerical algorithm . we devise a loop computing @xmath176 and @xmath177 for @xmath178 , where @xmath179 is the dimension of @xmath169 ; the loop is terminated whenever @xmath180 , and we compute the trace via the formula above for this @xmath173 . in particular , for the examples we have tested , @xmath181 is usually a full matrix , and we observed that the loop almost always terminated with @xmath182 . the algorithm which traverses all pairs of neighboring aggregates and optimizes their shapes is as follows :

1 . _ input _ : two sets of vertices , @xmath148 and @xmath149 , corresponding to a pair of neighboring subgraphs . _ output _ : two sets of vertices , @xmath183 and @xmath184 , satisfying @xmath185 , such that the subgraphs corresponding to @xmath183 and @xmath184 are both connected .

2 . let @xmath186 , then compute @xmath187 .

3 . run in parallel to generate all partitionings such that @xmath183 and @xmath184 are connected and the vertex sets satisfy $ \tilde{\mathcal{v}}_{1 } \cup \tilde{\mathcal{v}}_{2 } = { \mathcal{v}}_{1 } \cup { \mathcal{v}}_{2 } $ .

4 . run in parallel to compute the norm @xmath165 for all partitionings obtained in the previous step , and return the partitioning that results in the maximal @xmath165 .
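a sketch of the trace shortcut used when evaluating @xmath165 ( python ; it assumes , as the identity above suggests , that the rank one matrix in question is symmetric , so that its only nonzero eigenvalue equals its trace ; the variable names are ours ) :

```python
import numpy as np

def rank_one_trace(B, tol=1e-12):
    """Trace of a symmetric rank-one matrix B = w w^T, computed from
    the first sufficiently large diagonal entry k and the k-th column:
    trace(B) = (b_k^T b_k) / B[k, k].  For such B this also equals the
    largest (only nonzero) eigenvalue."""
    for k in range(B.shape[0]):
        d = B[k, k]
        if abs(d) > tol:
            b = B[:, k]
            return float(b @ b / d)
    return 0.0

w = np.array([1.0, -2.0, 0.5])
assert np.isclose(rank_one_trace(np.outer(w, w)), np.trace(np.outer(w, w)))
```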
the subgraph reshaping algorithm fits well with the programming model of a multicore gpu .
we demonstrate this algorithm on two example problems , and later show its potential as a post - process for the parallel aggregation algorithm ( algorithm [ alg : paa ] ) outlined in the previous section . in the examples that follow , we use the rank one optimization and then measure the quality of the coarse space by also computing the energy norm of @xmath189 , where @xmath190 is the @xmath191-orthogonal projection onto the space @xmath128 .
[ example-1 ] : consider a graph laplacian @xmath94 corresponding to a graph which is a @xmath192 square grid .
the weights on the edges are all equal to @xmath152 .
we start with an obviously non - optimal partitioning , as shown on the left of figure [ fig : isotropic ] , for which the resulting two level method , consisting of @xmath0-jacobi pre- and post - smoothers and an exact coarse level solver , has a convergence rate @xmath193 , and @xmath194 . after applying algorithm [ alg : sra ] ,
the refined aggregates have the shapes shown on the right of figure [ fig : isotropic ] , for which the two level method has the same convergence rate @xmath193 but the square of the energy seminorm is reduced to @xmath195
. @xmath196 [ example-2 ] consider a graph laplacian @xmath94 corresponding to a graph which is a @xmath192 square grid , on which all horizontal edges are weighted @xmath152 while all vertical edges are weighted @xmath197 .
such graph laplacian represents anisotropic coefficient elliptic equations with neumann boundary conditions .
start with a non - optimal partitioning , as shown on the left of figure [ fig : anisotropic ] , for which the resulting two level method has a convergence rate @xmath198 and @xmath199 . after applying algorithm [ alg : sra ] ,
the refined aggregates have the shapes shown on the right of figure [ fig : anisotropic ] , for which the two level convergence rate is reduced to @xmath200 and the energy of the coarse level projection is also reduced to @xmath195 .
in this section , we discuss the parallelization of the solver phase on gpu .
more precisely , we will focus on the parallel smoother , prolongation / restriction , mg cycle , and sparse matrix - vector multiplication .
an efficient parallel smoother is crucial for the parallel amg method . for the sequential amg method , gauss - seidel relaxation is widely used and has been shown to have good smoothing properties . however , standard gauss - seidel is a sequential procedure that does not allow an efficient parallel implementation . to improve the arithmetic intensity of the smoother and make it work better with simt - based gpus , we adopt the well - known jacobi relaxation and introduce a damping factor to improve its performance . for a matrix @xmath201 whose diagonal is denoted by @xmath202 , the jacobi smoother can be written in the following matrix form @xmath203 or component - wise @xmath204 this procedure can be implemented efficiently on gpus by assigning one thread to each component and updating the corresponding components locally and simultaneously .
we also consider the so - called @xmath0 jacobi smoother , which is parameter free .
define @xmath205 where @xmath206 with @xmath207 ; the @xmath0 jacobi smoother then has the following matrix form @xmath208 or component - wise @xmath209 in @xcite it has been shown that if @xmath20 is symmetric positive definite , the smoother is always convergent and has multigrid smoothing properties comparable to the full gauss - seidel smoother if @xmath210 and @xmath211 is bounded away from zero .
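a compact sketch of both smoothers ( python ; one line per sweep , mirroring the one - thread - per - component gpu update ) ; the damping factor and the particular @xmath0 diagonal below are common choices and stand in for the precise scalings of the cited references :

```python
import numpy as np
import scipy.sparse as sp

def damped_jacobi(A, b, x, omega=0.8):
    """x <- x + omega * D^{-1} (b - A x); each component updates
    independently, so one GPU thread per row suffices."""
    return x + omega * (b - A @ x) / A.diagonal()

def l1_jacobi(A, b, x):
    """Parameter-free sweep with the diagonal replaced by row-wise
    l1 norms, d_i = sum_j |a_ij| (one common form of the l1 smoother)."""
    d = np.asarray(abs(A).sum(axis=1)).ravel()
    return x + (b - A @ x) / d

# usage on a small SPD model problem
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(50, 50)).tocsr()
b = np.ones(50)
x = np.zeros(50)
for _ in range(10):
    x = l1_jacobi(A, b, x)
```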
moreover , because its formula is very similar to that of the jacobi smoother , it can also be implemented efficiently on gpus by assigning one thread to each component and updating the corresponding component locally and simultaneously . for the ua - amg method ,
the prolongation and restriction matrices are piecewise constant and characterize the aggregates .
therefore , we can perform the prolongation and restriction efficiently in the ua - amg method . here , the output array @xmath212 ( the column indices of @xmath76 ) , which encodes the aggregates , plays an important role .

* prolongation : * let @xmath213 , so that the action @xmath214 can be written component - wise as follows : @xmath215 assign each thread to one element of @xmath216 ; the array ` aggregation ` can be used to obtain information about @xmath217 , i.e. , ` aggregation[i ] ` gives the aggregate of fine vertex @xmath13 , so that prolongation can be efficiently implemented in parallel .

* restriction : * let @xmath219 , so that the action @xmath220 can be written component - wise as follows : @xmath221 each thread is assigned to an element of @xmath222 , and the array ` aggregation ` can be used to obtain information about @xmath223 , i.e. , to find all @xmath14 such that ` aggregation[j ] = i ` . by doing so , the action of restriction can also be implemented in parallel . a sketch of both actions follows .
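( python stand - in for the per - thread gpu kernels , using the ` aggregation ` array exactly as described above : )

```python
import numpy as np

def prolongate(e_coarse, aggregation):
    """(P e)_i = e_{aggregation[i]}: one thread per fine-level entry."""
    return e_coarse[aggregation]

def restrict(r_fine, aggregation, n_coarse):
    """(P^T r)_I = sum over fine vertices i with aggregation[i] = I."""
    return np.bincount(aggregation, weights=r_fine, minlength=n_coarse)

aggregation = np.array([0, 0, 1, 1, 1])
print(prolongate(np.array([10.0, 20.0]), aggregation))  # [10 10 20 20 20]
print(restrict(np.ones(5), aggregation, 2))             # [2. 3.]
```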
unfortunately , in general , ua - amg with a v - cycle is not an optimal algorithm in terms of convergence rate . on the other hand , in many cases , ua - amg with a two - grid solve phase gives an optimal convergence rate for graph laplacian problems .
this motivated us to use other cycles instead of v - cycle to mimic the two - grid algorithm .
the idea is to invest more work on the coarse grid and make the method closer to an exact two - level method ; then , hopefully , the resulting cycle will have an optimal convergence rate .
the particular cycle we will discuss here is the so - called k - cycle ( nonlinear amli - cycle ) and we refer to @xcite for details on its implementation in general .
as the k - cycle will be used as a preconditioner for the nonlinear preconditioned conjugate gradient ( npcg ) method , the sparse matrix - vector multiplication ( spmv ) makes a major contribution to the computational work involved .
an efficient spmv algorithm on gpu requires a suitable sparse matrix storage format .
how different storage formats perform in spmv is extensively studied in @xcite .
this study shows that the need for coalesced memory access makes the ellpack ( ell ) format one of the most efficient sparse matrix storage formats on gpus when each row of the sparse matrix has roughly the same number of nonzeros . in our study , because our main focus is on the parallel aggregation algorithm and the performance of the ua - amg method , we still use the compressed row storage ( csr ) format , which has been widely used for iterative linear solvers on cpus .
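for reference , the csr spmv kernel in its simplest form ( serial python ; on the gpu one thread , or one warp , would be mapped to each row ) :

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A x with A in compressed row storage: for row i, the
    nonzeros are data[indptr[i]:indptr[i+1]] in columns
    indices[indptr[i]:indptr[i+1]]."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):       # one GPU thread per row in the parallel code
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y
```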
although this is not an ideal choice for gpu implementation , the numerical results in the next section already show the efficiency of our parallel amg method .
in this section , we present numerical tests using the proposed parallel amg methods .
whenever possible we compare the results with the cusp libraries @xcite .
cusp is an open source c++ library of generic parallel algorithms for sparse linear algebra and graph computations on cuda - enabled gpus .
all of cusp 's algorithms and implementations have been optimized for the gpu by nvidia 's research group . to the best of our knowledge ,
the parallel amg method implemented in the cusp package is the state - of - the - art amg method on gpu .
we use as test problems several discretizations of the laplace equation . define @xmath190 , the @xmath86 projection on the piece - wise constant space @xmath225 , as the following : @xmath226 we present several tests showing how the energy norm of this projection changes with respect to different parameters used in the parallel aggregation algorithm , since the convergence rate is an increasing function of @xmath227 .
the tests involving @xmath227 further suggest two additional features necessary to get a multigrid hierarchy with predictable results .
first , the sizes of aggregates need to be limited , and second , the columns of the prolongator @xmath76 need to be ordered in a deterministic way , regardless of the order that aggregates are formed .
the first requirement can be fulfilled simply by limiting the sizes of the aggregates in each pass of the parallel aggregation algorithm .
we make the second requirement more specific .
let @xmath228 be the index of the coarse vertex of the @xmath173-th aggregate .
we require that @xmath228 should be an increasing sequence and then use the @xmath173-th column of @xmath76 to record the aggregate with the coarse vertex numbered @xmath228 .
this can be done by using a generalized version of the prefix sum algorithm @xcite .
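a one - line realization of this ordering ( python sketch ) : an exclusive prefix sum over the coarse - vertex indicator assigns each coarse vertex a column number determined only by its vertex index , not by the order in which aggregates were formed :

```python
import numpy as np

def aggregate_column_index(is_coarse):
    """Exclusive prefix sum of the coarse-vertex indicator: if vertex v
    is the k-th coarse vertex in vertex order, col[v] = k.  Entries at
    non-coarse vertices are unused."""
    ind = np.asarray(is_coarse, dtype=np.int64)
    return np.cumsum(ind) - ind   # exclusive scan

cols = aggregate_column_index([0, 1, 0, 1, 1])   # -> columns 0, 1, 2
```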
we first show in table [ square - dirichlet ] the coarsening ratios ( in the parenthesis in the table ) and the energy norms @xmath229 of a two grid hierarchy , for a laplace equation with dirichlet boundary conditions on a structured grid containing @xmath230 vertices .
the limit on the size of an aggregate is denoted by @xmath8 , meaning that any aggregate can include at most @xmath8 vertices , which directly implies that the resulting coarsening ratio is less than or equal to @xmath8 .
m. brezina and p. s. vassilevski . smoothed aggregation spectral element agglomeration amg : sa-@xmath231amge . in
_ proceedings of the 8th international conference on large - scale scientific computing _ , volume 7116 of _ lecture notes in computer science _ , pages 3 - 15 .
springer - verlag , 2012 .
e. chow , r. falgout , j. hu , r. tuminaro , and u. yang .
a survey of parallelization techniques for multigrid solvers . in
_ parallel processing for scientific computing _ , volume 20 , pages 179 - 201 .
siam , 2006 .
a. j. cleary , r. d. falgout , v. e. henson , and j. e. jones .
coarse - grid selection for parallel algebraic multigrid . in
_ solving irregularly structured problems in parallel _ , volume 1457 of _ lecture notes in computer science _ , pages 104 - 115 .
springer , 1998 .
z. feng and z. zeng .
parallel multigrid preconditioning on graphics processing units ( gpus ) for robust power grid analysis . in _ proceedings of the 47th design automation conference _ , dac '10 , pages 661 - 666 .
acm , 2010 .
j. kraus and m. förster .
efficient amg on heterogeneous systems . in
_ facing the multicore - challenge ii _ , volume 7174 of _ lecture notes in computer science _ , pages 133 - 146 .
springer berlin heidelberg , 2012 . | we design and implement a parallel algebraic multigrid method for isotropic graph laplacian problems on multicore graphical processing units ( gpus ) .
the proposed amg method is based on the aggregation framework .
the setup phase of the algorithm uses a parallel maximal independent set algorithm in forming aggregates and the resulting coarse level hierarchy is then used in a k - cycle iteration solve phase with a @xmath0-jacobi smoother .
numerical tests of a parallel implementation of the method for graphics processors are presented to demonstrate its effectiveness . |
WASHINGTON — BY all outward indications, the U.S. Navy Veterans Association was a leader in the charitable community. Founded in 2002 to provide support to Navy veterans in need, the charity recorded astonishing financial success. In its first eight years, it raised around $100 million in charitable contributions, almost all of it through a direct marketing campaign. The organization, headed by Jack L. Nimitz, boasted of 41 state chapters and some 66,000 members.
This would be a great story of charitable success, except for the fact that virtually everything about the association turned out to be false: no state chapters, no members, no leader with the name redolent of naval history. Instead, there was one guy: a man calling himself Bobby Thompson who worked from a duplex across the street from the Cuesta-Rey cigar factory in the Ybor City neighborhood of Tampa.
But the money raised was real enough, generated by a series of for-profit telemarketers. The victims, by and large, were unsuspecting small-money donors who received urgent solicitations asking for support for needy naval veterans. Most of the money raised stayed with the fund-raisers, though plenty apparently dripped through to Mr. Thompson and a succession of Republican lawmakers who received generous contributions from the association’s political arm. But little ever made it to the intended beneficiaries. In 2010, the scheme was unwound by two reporters for what is now The Tampa Bay Times, but not before Mr. Thompson had fled the state of Florida.
From June 2010, Mr. Thompson was on the run, the search for him hamstrung by the fact that no one had any real idea of who he was. Finally, on April 30, 2012, federal marshals tracked him down in Portland, Ore., finding him with a card to a storage unit containing $981,650 in cash and almost two dozen fake identity cards.
Earlier this month in Ohio, where the charity’s registration documents had been filed, the man arrested as Bobby Thompson was convicted on 23 felony counts, including fraud, theft and money laundering. Authorities have identified him as John Donald Cody, a former Army intelligence officer and Harvard Law graduate. Given its sensational facts, the case has drawn more attention than your average matter in Cuyahoga County Common Pleas Court. But the story is worth paying attention to for a more important reason, if we want to prevent more Bobby Thompsons in our future.
The most outrageous aspect of the case is that much of what Mr. Cody did was probably legal, or at least not specifically illegal. The principal beneficiaries were always the association’s for-profit fund-raisers. During the trial, one of them, Thomas Berkenbush of Community Support Inc., testified, apparently without fear of legal repercussions, that his company had kept 90 percent of the donations as a fund-raising charge.
That, in and of itself, isn’t criminal. The alleged fraud was not that very, very little money ever went to Navy veterans. In fact, the fund-raising explicitly stated that a large portion of donations would go to cover telemarketing and other costs. Mr. Cody ran afoul of the law because he filed registration documents that contained false statements, because he stole the identity of the real Bobby Thompson, and because he pulled money from organizational accounts for his personal use. The irony is that he could have accomplished virtually his entire enrichment scheme without ever violating the law — and others have figured that out.
The I.R.S.’s Exempt Organizations Division, which is responsible for supervising the charitable sector, is chronically understaffed. It can’t do much more than process the routine and voluminous reporting of the more than 1.5 million American nonprofits, and keep up with the tens of thousands of applications filed each year to start new charities.
State and local authorities are in no better shape. Joel L. Fleishman, a professor of public policy at Duke, estimates that there are fewer than 100 full-time state charity regulators, far too few to exercise any real oversight.
In the Navy Veterans case, amazingly, the I.R.S. did undertake one of its rare field audits. And yet, despite the fact that the main office was a trailer, its state offices were empty lots or postal drops, and its board of directors and C.E.O. a total fiction, the I.R.S. in 2008 gave the association a “clean bill of health.” It wasn’t until the two reporters came sniffing — first curious about the political contributions and subsequently intrigued by Mr. Thompson’s obvious dissembling — that the real story began to emerge.
When it comes to frauds like these, it is neither the law nor the regulators that are the best line of defense; it will always be the careful application of caveat emptor by potential donors. This isn’t easy: there are approximately 59,000 charities in this country with the word “veterans” in their names. Only a few people can claim the expertise to say which are the best, let alone which are trustworthy.
As we enter the annual giving season, donors should look to sources like the GiveWell website to find organizations with a track record of effectiveness. Seeking them out — instead of donating to charities that are first to call or that sound familiar or that we’ve heard are good — is the only way to ensure that money reaches those in need. ||||| Our Top Charities
We recommend few charities by design, because we see ourselves as a finder of great giving opportunities rather than a charity evaluator. In other words, we're not seeking to classify large numbers of charities as "good" or "bad"; our mission is solely to identify, and thoroughly investigate, the best. Read about our charity selection process and where we currently recommend that donors give. | – This month, the man behind a $100 million "charity" scam was finally convicted on 23 felony counts. But "the most outrageous aspect of the case is that much of what [John Donald] Cody did was probably legal, or at least not specifically illegal," writes Ken Stern in the New York Times. Cody's US Navy Veterans Association purported to help needy Navy vets, but much of the millions raised over eight years stayed with the for-profit telemarketers who brought it in from unsuspecting donors. The rest went to Cody and various lawmakers, and the whole thing unraveled only because two reporters started investigating. Yet, outrageous though it may seem, "the alleged fraud was not that very, very little money ever went to Navy veterans," Stern writes. Rather, it stemmed from things like Cody filing registration documents with false information. "The irony is that he could have accomplished virtually his entire enrichment scheme without ever violating the law—and others have figured that out." The IRS simply doesn't have enough staff to adequately regulate the charitable sector, and neither do state and local authorities. (Indeed, the IRS actually audited the Navy Veterans charity in 2008 and found nothing amiss.) That's why donors need to do their homework before giving, by first checking sources like GiveWell to find worthy organizations. Click for Stern's full column. |
large data sets of points in high - dimension often lie close to a smooth low - dimensional manifold .
a fundamental problem in processing such data sets is the construction of an efficient parameterization that allows for the data to be well represented in fewer dimensions .
such a parameterization may be realized by exploiting the inherent manifold structure of the data .
however , discovering the geometry of an underlying manifold from only noisy samples remains an open topic of research .
the case of data sampled from a linear subspace is well studied ( see @xcite , for example ) .
the optimal parameterization is given by principal component analysis ( pca ) , as the singular value decomposition ( svd ) produces the best low - rank approximation for such data .
however , most interesting manifold - valued data organize on or near a nonlinear manifold .
pca , by projecting data points onto the linear subspace of best fit , is not optimal in this case as curvature may only be accommodated by choosing a subspace of dimension higher than that of the manifold .
algorithms designed to process nonlinear data sets typically proceed in one of two directions .
one approach is to consider the data globally and produce a nonlinear embedding .
alternatively , the data may be considered in a piecewise - linear fashion and linear methods such as pca may be applied locally .
the latter is the subject of this work .
local linear parameterization of manifold - valued data requires the estimation of the local tangent space ( `` tangent plane '' ) from a neighborhood of points .
however , sample points are often corrupted by high - dimensional noise and any local neighborhood deviates from the linear assumptions of pca due to the curvature of the manifold .
therefore , the subspace recovered by local pca is a perturbed version of the true tangent space .
the goal of the present work is to characterize the stability and accuracy of local tangent space estimation using eigenspace perturbation theory .
the proper neighborhood for local tangent space recovery must be a function of intrinsic ( manifold ) dimension , curvature , and noise level ; these properties often vary as different regions of the manifold are explored .
however , local pca approaches proposed in the data analysis and manifold learning literature often define locality via an _ a priori _ fixed number of neighbors or as the output of an algorithm ( e.g. , @xcite ) .
other methods @xcite adaptively estimate local neighborhood size but are not tuned to the perturbation of the recovered subspace .
our approach studies this perturbation as the size of the neighborhood varies to guide the definition of locality .
on the one hand , a neighborhood must be small enough so that it is approximately linear and avoids curvature . on the other hand ,
a neighborhood must be large enough to overcome the effects of noise . a simple yet instructive example of these competing criteria is shown in figure [ fig : example ] .
the tangent plane at every point of a noisy 2-dimensional data set embedded in @xmath0 is computed via local pca .
each point is color coded according to the angle formed with the true tangent plane .
three different neighborhood definitions are used : a small , fixed radius ( figure [ fig : example]a ) ; a large , fixed radius ( figure [ fig : example]b ) ; and radii defined adaptively according to the analysis presented in this work ( figure [ fig : example]c ) .
in fact , because the curvature varies across the data , only the adaptively defined neighborhoods avoid random orientation due to noise ( as seen in figure [ fig : example]a ) and misalignment due to curvature ( as seen in figure [ fig : example]b ) .
figure [ fig : example]c shows accurate and stable recovery at almost every data point , with misalignment only in the small region of very high curvature that will be troublesome for any method .
the present work quantifies this observed behavior in the high - dimensional setting .
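the local pca estimate used throughout can be sketched in a few lines ( python ; the data array , radius , and reference point are inputs , and the true tangent basis in the comparison is assumed known , as in the synthetic example above ) :

```python
import numpy as np

def local_tangent(data, p, radius, d):
    """PCA tangent-space estimate at p: top d right-singular vectors
    of the centered neighborhood of the given radius.  `data` is an
    N x D array; returns a D x d orthonormal basis."""
    nbrs = data[np.linalg.norm(data - p, axis=1) <= radius]
    centered = nbrs - nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:d].T

def largest_principal_angle(U_hat, U_true):
    """Angle (radians) used to color-code the figure: arccos of the
    smallest singular value of U_true^T U_hat."""
    s = np.linalg.svd(U_true.T @ U_hat, compute_uv=False)
    return float(np.arccos(np.clip(s.min(), 0.0, 1.0)))
```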
we present a non - asymptotic , eigenspace perturbation analysis to bound , with high probability , the angle between the recovered linear subspace and the true tangent space as the size of the local neighborhood varies .
the analysis accurately tracks the subspace recovery error as a function of neighborhood size , noise , and curvature .
thus , we are able to adaptively select the neighborhood that minimizes this bound , yielding the best estimate of the local tangent space from a large but finite number of noisy manifold samples .
further , the behavior of this bound demonstrates the non - trivial existence of such an optimal scale .
we are also able to accurately and efficiently estimate the curvature and noise level of the local neighborhood .
finally , we introduce a geometric uncertainty principle quantifying the limits of noise - curvature perturbation for tangent space recovery .
our analysis is related to the very recent work of tyagi , _ et al . _
@xcite , in which neighborhood size and sampling density conditions are given to ensure a small angle between the pca subspace and the true tangent space of a noise - free manifold .
results are extended to arbitrary smooth embeddings of the manifold model , which we do not consider .
in contrast , we envision the scenario in which no control is given over the sampling and explore the case of data sampled according to a fixed density and corrupted by high - dimensional noise .
crucial to our results is a careful analysis of the interplay between the perturbation due to noise and the perturbation due to curvature .
nonetheless , our results can be shown to recover those of @xcite in the noise - free setting .
our approach is also similar to the analysis presented by nadler in @xcite , who studies the finite - sample properties of the pca spectrum . through matrix perturbation theory
, @xcite examines the angle between the leading finite - sample - pca eigenvector and that of the leading population - pca eigenvector . as a linear model
is assumed , perturbation results from noise only . despite this difference ,
the two analyses utilize similar techniques to bound the effects of perturbation on the pca subspace and our results recover those of @xcite in the curvature - free setting .
other recent related works include that of singer and wu @xcite , who use local pca to build a tangent plane basis and give an analysis for the neighborhood size to be used in the absence of noise . using the hybrid linear model , zhang , _ et al . _
@xcite assume data are samples from a collection of `` flats '' ( affine subspaces ) and choose an optimal neighborhood size from which to recover each flat by studying the least squares approximation error in the form of jones @xmath1-number ( see @xcite and also @xcite in which this idea is used for curve denoising ) .
an analysis of noise and curvature for normal estimation of smooth curves and surfaces in @xmath2 and @xmath3 is presented by mitra , _ et al .
_ @xcite with application to computer graphics .
finally , chen , _ et al . _
@xcite present a multiscale pca framework with which to analyze the intrinsic dimensionality of a data set .
we consider the problem of recovering the best approximation to a local tangent space of a nonlinear @xmath4-dimensional riemannian manifold @xmath5 from noisy samples presented in dimension @xmath6 .
working about a reference point @xmath7 , an approximation to the tangent space of @xmath5 at @xmath7 is given by the span of the top @xmath4 eigenvectors of the centered data covariance matrix ( where `` top '' refers to the @xmath4 eigenvectors or singular vectors associated with the @xmath4 largest eigenvalues or singular values ) .
the question becomes : how many neighbors of @xmath7 should be used ( or in how large of a radius about @xmath7 should we work ) to recover the best approximation ?
we will often use the term `` scale '' to refer to this neighborhood size or radius . to answer this question
, we consider the perturbation of the eigenvectors spanning the estimated tangent space in the context of the `` noise - curvature trade - off . '' to balance the effects of noise and curvature ( as observed in the example of the previous subsection , figure [ fig : example ] ) , we seek a scale large enough to be above the noise level but still small enough to avoid curvature .
this scale reveals a linear structure that is sufficiently decoupled from both the noise and the curvature to be well approximated by a tangent plane . at this scale ,
the recovered eigenvectors span a subspace corresponding very closely to the true tangent space of the manifold at @xmath7 .
we note that the concept of noise - curvature trade - off has been a subject of interest for decades in dynamical systems theory @xcite . the main result of this work is a bound on the angle between the computed and true tangent spaces .
define @xmath8 to be the orthogonal projector onto the true tangent space and let @xmath9 be the orthogonal projector constructed from the @xmath4-dimensional eigenspace of the neighborhood covariance matrix .
then the distance @xmath10 corresponds to the sum of the squared sines of the principal angles between the computed and true tangent spaces and we use eigenspace perturbation theory to bound this norm .
momentarily neglecting probability - dependent constants to ease the presentation , the first - order approximation of this bound has the following form : + * informal main result .
* the bound takes the form @xmath11 over @xmath16 , with @xmath16 = \frac{r^2}{d+2 } - \frac{k^2r^4}{2(d+2)^2 } - \sigma^2\left(\sqrt{d } + \sqrt{D - d}\right) , where @xmath12 is the radius ( measured in the tangent plane ) of the neighborhood containing @xmath13 points , @xmath14 is the noise level , and @xmath15 is the ( rescaled ) norm of the vector of mean curvatures . +
the quantities @xmath13 , @xmath12 , @xmath14 , and @xmath15 as well as the sampling assumptions are more formally defined in section [ sec : prelim ] and the formal result is presented in section [ sec : mainresult ] .
the denominator of this bound , denoted here by @xmath16 , @xmath17 quantifies the separation between the spectrum of the linear subspace ( @xmath18 ) and the perturbation due to curvature ( @xmath19 ) and noise ( @xmath20 ) .
clearly , we must have @xmath21 to approximate the appropriate linear subspace , a requirement made formal by theorem [ thm : mainresult1 ] in section [ sec : mainresult ] .
in general , when @xmath16 is zero ( or negative ) , the bound becomes infinite ( or negative ) and is not useful for subspace recovery .
however , the geometric information encoded by offers more insight .
for example , we observe that a small @xmath16 indicates that the estimated subspace contains a direction orthogonal to the true tangent space ( due to the curvature or noise ) .
we therefore consider @xmath16 to be the condition number for subspace recovery and use it to develop our geometric interpretation for the bound .
the noise - curvature trade - off is readily apparent from .
the linear and curvature contributions are small for small values of @xmath12 .
thus for a small neighborhood ( @xmath12 small ) , the denominator is either negative or ill conditioned for most values of @xmath14 and the bound becomes large .
this matches our intuition as we have not yet encountered much curvature but the linear structure has also not been explored .
therefore , the noise dominates the early behavior of this bound and an approximating subspace may not be recovered from noise .
as the neighborhood radius @xmath12 increases , the conditioning of the denominator improves , and the bound is controlled by the @xmath22 behavior of the numerator .
this again corresponds with our intuition : the addition of more points serves to overcome the effects of noise as the linear structure is explored .
thus , when @xmath23 is well conditioned , the bound on the angle may become smaller with the inclusion of more points .
eventually @xmath12 becomes large enough such that the curvature contribution approaches the size of the linear contribution and @xmath23 becomes large .
the @xmath22 term is overtaken by the ill conditioning of the denominator and the bound is again forced to become large .
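the trade - off can be traced numerically from the denominator alone ( python sketch ; the parameter values below are illustrative assumptions , not the paper's experiments ) :

```python
import numpy as np

def delta(r, d, D, K, sigma):
    """Condition quantity from the informal main result: linear term
    minus the curvature and noise perturbations."""
    lin = r ** 2 / (d + 2)
    curv = K ** 2 * r ** 4 / (2 * (d + 2) ** 2)
    noise = sigma ** 2 * (np.sqrt(d) + np.sqrt(D - d))
    return lin - curv - noise

d, D, K, sigma = 2, 100, 1.0, 0.05       # assumed values
r = np.linspace(1e-3, 2.0, 2000)
vals = delta(r, d, D, K, sigma)
r_best = r[np.argmax(vals)]   # best-conditioned scale: noise-curvature balance
recoverable = vals.max() > 0  # uncertainty principle: need delta > 0 somewhere
```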
the noise - curvature trade - off , seen analytically above , will be demonstrated numerically in section [ sec : numerical ] . enforcing a well conditioned recovery bound yields a geometric uncertainty principle quantifying the amount of curvature and noise we may tolerate . to recover an approximating subspace
, we must have : + * geometric uncertainty principle .
* @xmath24 by preventing the curvature and noise level from simultaneously becoming large , this requirement ensures that the linear structure of the data is recoverable . with high probability ,
the noise component normal to the tangent plane concentrates on a sphere with mean curvature @xmath25 .
as will be shown , this uncertainty principle expresses the intuitive notion that the curvature of the manifold must be less than the curvature of this noise - ball .
otherwise , the combined effects of noise and curvature perturbation prevent an accurate estimate of the local tangent space .
the remainder of the paper is organized as follows .
section [ sec : prelim ] provides the notation , geometric model , and necessary mathematical formulations used throughout this work .
eigenspace perturbation theory is reviewed in this section .
the main results are stated formally in section [ sec : mainresult ] .
we demonstrate the accuracy of our results and test the sensitivity to errors in parameter estimation in section [ sec : numerical ] .
methods for recovering neighborhood curvature and noise level are also demonstrated in this section .
we conclude in section [ sec : discussion ] with a discussion of the relationship to previously established results and algorithmic considerations .
technical results and proofs are presented in the appendices .
a @xmath4-dimensional riemannian manifold of codimension 1 may be described locally by the surface @xmath26 , where @xmath27 is a coordinate in the tangent plane . after translating the origin
, a rotation of the coordinate system can align the coordinate axes with the principal directions associated with the principal curvatures at the given reference point @xmath7 .
aligning the coordinate axes with the plane tangent to @xmath5 at @xmath7 gives a local quadratic approximation to the manifold . using this choice of coordinates ,
the manifold may be described locally @xcite by the taylor series of @xmath28 at the origin @xmath7 : @xmath29 where @xmath30 are the principal curvatures of @xmath5 at @xmath7 . in this coordinate system
, @xmath7 has the form @xmath31^t,\ ] ] and points in a local neighborhood of @xmath7 have similar coordinates .
generalizing to a @xmath4-dimensional manifold of arbitrary codimension in @xmath32 , there exist @xmath33 functions @xmath34 for @xmath35 , with @xmath36 representing the principal curvatures in codimension @xmath37 at @xmath7 .
then , given the coordinate system aligned with the principal directions , a point in a neighborhood of @xmath7 has coordinates @xmath38 $ ] .
we truncate the taylor expansion and use the quadratic approximation @xmath39 @xmath40 , to describe the manifold locally .
consider now discrete samples from @xmath5 obtained by uniformly sampling the first @xmath4 coordinates @xmath41 in the tangent space inside @xmath42 , the @xmath4-dimensional ball of radius @xmath12 centered at @xmath7 , with the remaining @xmath33 coordinates given by the quadratic model above . because we are sampling from a noise - free linear subspace ,
the number of points @xmath13 captured inside @xmath42 is a function of the sampling density @xmath43 : @xmath44 where @xmath45 is the volume of the @xmath4-dimensional unit ball .
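the sampling model just described is easy to simulate ( python sketch ; the isotropic gaussian noise below is an assumption standing in for the noise model introduced later ) :

```python
import numpy as np

def sample_local_model(N, d, D, kappa, r, sigma, rng):
    """N samples with tangent coordinates uniform in the d-ball of
    radius r and normal coordinates f_i = (1/2) sum_j kappa[i, j] x_j^2,
    kappa of shape (D - d, d), plus noise of level sigma."""
    u = rng.standard_normal((N, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    x = u * (r * rng.random(N) ** (1.0 / d))[:, None]  # uniform in the ball
    f = 0.5 * (x ** 2) @ kappa.T                       # quadratic embedding
    return np.hstack([x, f]) + sigma * rng.standard_normal((N, D))

rng = np.random.default_rng(0)
pts = sample_local_model(500, 2, 10, np.ones((8, 2)), 0.5, 0.01, rng)
```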
of course it is unrealistic for the data to be observed in the described coordinate system .
as noted , we may use a rotation to align the coordinate axes with the principal directions associated with the principal curvatures .
doing so allows us to write as well as . because we will ultimately quantify the norm of each matrix using the unitarily - invariant frobenius norm
, this rotation will not affect our analysis .
we therefore proceed by assuming that the coordinate axes align with the principal directions .
the equation above represents an exact quadratic embedding of @xmath5 . while it may be interesting to consider more general embeddings , as is done for the noise - free case in @xcite , a taylor expansion followed by rotation and translation will result in an embedding of this form . noting that the numerical results of @xcite indicate no loss in accuracy when truncating higher - order terms , proceeding with an analysis of the quadratic model remains sufficiently general .
the true tangent space we wish to recover is given by the pca of @xmath56 . because we do not have direct access to @xmath56
, we work with @xmath60 as a proxy , and instead recover a subspace spanned by the corresponding eigenvectors of @xmath61 .
we will study how close this recovered invariant subspace of @xmath61 is to the corresponding invariant subspace of @xmath62 as a function of scale . throughout this work
, scale refers to the number of points @xmath13 in the local neighborhood within which we perform pca .
given a fixed density of points , scale may be equivalently quantified as the radius @xmath12 about the reference point @xmath7 defining the local neighborhood .
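in code , tangent space recovery at a fixed scale is simply a pca of the local neighborhood ; a minimal sketch ( our own , with assumed names ) follows .

```python
import numpy as np

def tangent_basis_at_scale(X, x0, N, d):
    """Estimate the d-dimensional tangent space at x0 from the N nearest
    neighbors of x0: center the neighborhood and take the top-d
    eigenvectors of its sample covariance."""
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:N]
    nbrs = X[idx]
    Xc = nbrs - nbrs.mean(axis=0)              # centering required by PCA
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / N)
    return evecs[:, np.argsort(evals)[::-1][:d]]   # D x d orthonormal basis
```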
given the decomposition of the data , we have @xmath63 we introduce some notation to account for the centering required by pca .
define the sample mean of @xmath13 realizations of random vector @xmath64 as @xmath65 where @xmath66 denotes the @xmath37th realization . letting @xmath67
represent the column vector of @xmath13 ones , define @xmath68 to be the matrix with @xmath13 copies of @xmath69 as its columns .
finally , let @xmath70 denote the centered version of @xmath71 : @xmath72 then we have @xmath73 the problem may be posed as a perturbation analysis of invariant subspaces . we rewrite the decomposition as @xmath74 , where @xmath75 is the perturbation that prevents us from working directly with @xmath76 .
the dominant eigenspace of @xmath77 is therefore a perturbed version of the dominant eigenspace of @xmath76 . seeking to minimize the effect of this perturbation
, we look for the scale @xmath78 ( equivalently @xmath79 ) at which the dominant eigenspace of @xmath77 is closest to that of @xmath76 . before proceeding , we review material on the perturbation of eigenspaces relevant to our analysis .
the reader familiar with this topic is invited to skip directly to theorem [ thm : stewart ] .
the distance between two subspaces of @xmath32 can be defined as the spectral norm of the difference between their respective orthogonal projectors @xcite .
as we will always be considering two equidimensional subspaces , this distance is equal to the sine of the largest principal angle between the subspaces . to control all such principal angles , we state our results using the frobenius norm of this difference .
our goal is therefore to control the behavior of @xmath80 , where @xmath8 and @xmath9 are the orthogonal projectors onto the subspaces computed from @xmath56 and @xmath60 , respectively .
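the subspace distance just described is straightforward to compute from orthonormal bases of the two subspaces ; a small sketch ( our own illustration ) :

```python
import numpy as np

def subspace_distance(U1, U2):
    """Distance between the column spaces of U1 and U2 (orthonormal columns).

    Returns the Frobenius norm of the difference of the orthogonal
    projectors (controls all principal angles at once) and the spectral
    norm (the sine of the largest principal angle)."""
    diff = U1 @ U1.T - U2 @ U2.T
    return np.linalg.norm(diff, 'fro'), np.linalg.norm(diff, 2)
```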
the norm @xmath80 may be bounded by the classic @xmath81 theorem of davis and kahan @xcite .
we will use a version of this theorem presented by stewart ( theorem v.2.7 of @xcite ) , modified for our specific purpose .
first , we establish some notation , following closely that found in @xcite .
consider the eigendecompositions @xmath82~\begin{bmatrix } \lambda_1 & \\ & \lambda_2 \end{bmatrix } ~[u_1~u_2]^t , \label{eq : eigdecompl } \\
\frac{1}{n}\widetilde{x}\widetilde{x}^t & = \widehat{u } \widehat{\lambda } \widehat{u}^t = [ \widehat{u}_1~\widehat{u}_2]~\begin{bmatrix } \widehat{\lambda}_1 & \\ & \widehat{\lambda}_2 \end{bmatrix}~[\widehat{u}_1~\widehat{u}_2]^t , \label{eq : eigdecompa}\end{aligned}\ ] ] such that the columns of @xmath83 are the eigenvectors of @xmath84 and the columns of @xmath85 are the eigenvectors of @xmath86 .
the eigenvalues of @xmath84 are arranged in descending order as the entries of diagonal matrix @xmath87 .
the eigenvalues are also partitioned such that diagonal matrices @xmath88 and @xmath89 contain the @xmath4 largest entries of @xmath87 and the @xmath33 smallest entries of @xmath87 , respectively .
the columns of @xmath90 are those eigenvectors associated with the @xmath4 eigenvalues in @xmath88 , the columns of @xmath91 are those eigenvectors associated with the @xmath33 eigenvalues in @xmath89 , and the eigendecomposition of @xmath86 is similarly partitioned .
the subspace we recover is spanned by the columns of @xmath92 and we wish to have this subspace as close as possible to the tangent space spanned by the columns of @xmath90 .
the orthogonal projectors onto the tangent and computed subspaces , @xmath8 and @xmath9 respectively , are given by @xmath93 define @xmath94 to be the @xmath4th largest eigenvalue of @xmath84 , or the last entry on the diagonal of @xmath88 .
this eigenvalue corresponds to variance in a tangent space direction .
we are now in position to state the theorem .
note that we have made use of the fact that the columns of @xmath83 are the eigenvectors of @xmath76 , that @xmath95 are hermitian ( diagonal ) matrices , and that the frobenius norm is used to measure distances .
the reader is referred to @xcite for the theorem in its original form .
[ thm : stewart ] let @xmath96 and consider
* ( condition 1 ) @xmath97
* ( condition 2 ) @xmath98 .
then , provided that conditions 1 and 2 hold , @xmath99 it is instructive to consider the perturbation @xmath100 as an operator with range in @xmath101 .
ideally , the perturbation would have very little effect on the tangent space ; @xmath100 would map points from the column space of @xmath90 to the column space of @xmath90 . as this will not be the case in general
, we expect @xmath102 will have a component that is normal to the tangent space .
the numerator @xmath103 of measures this normal component , thereby quantifying the effect of the perturbation on the tangent space .
then @xmath104 measures the component that remains in the tangent space after the action of @xmath100 .
as this component does not contain curvature , @xmath104 corresponds to the spectrum of the noise projected in the tangent space .
similarly , @xmath105 measures the spectrum of the curvature and noise perturbation normal to the tangent space .
thus , when @xmath100 leaves the column space of @xmath90 mostly unperturbed ( i.e. , @xmath103 is small ) and the spectrum of the tangent space is well separated from that of the noise and curvature , the estimated subspace will form only a small angle with the true tangent space . in the next section
, we use the machinery of this classic result to bound the angle caused by the perturbation @xmath100 and develop an interpretation of the conditions of theorem [ thm : stewart ] suited to the noise - curvature trade - off .
given the framework for analysis developed above , the terms appearing in the statement of theorem [ thm : stewart ] ( @xmath106 , @xmath107 , @xmath108 , @xmath109 , and @xmath94 ) must be controlled .
we notice that @xmath100 is a symmetric matrix , so that @xmath110 . using the triangle inequality and the geometric constraints @xmath111 the norms
may be controlled by bounding the contribution of each term in the perturbation @xmath100 : @xmath112 importantly , we seek control over each ( right - hand side ) term in the finite - sample regime , as we assume a possibly large but finite number of sample points @xmath13 .
therefore , bounds are derived through a careful analysis employing concentration results and techniques from non - asymptotic random matrix theory .
the technical analysis is presented in the appendix and proceeds by analyzing three distinct cases : the covariance of bounded random matrices , unbounded random matrices , and the interaction of bounded and unbounded random matrices .
the eigenvalue @xmath94 is bounded again using random matrix theory . in all cases , care is taken to ensure that bounds hold with high probability that is independent of the ambient dimension @xmath49 .
other , possibly tighter , avenues of analysis may be possible for some of the bounds presented in the appendix .
however , the presented analysis avoids large union bounds and dependence on the ambient dimension to state results holding with high probability .
alternative analyses are possible , often sacrificing probability to exhibit sharper concentration .
we proceed with a theoretical analysis holding with the highest probability while maintaining accurate results .
we are now in position to apply theorem [ thm : stewart ] and state our main result .
first , we make the following definitions involving the principal curvatures : @xmath113 @xmath114 the constant @xmath115 is the mean curvature ( rescaled by a factor of @xmath4 ) in codimension @xmath37 , for @xmath116 .
the curvature of the local model is quantified by @xmath15 and is a natural result of our use of the frobenius norm .
note that @xmath117 .
we also define the constants @xmath118 to be used when strictly positive curvature terms are required .
the main result is formulated in the appendix and makes the following benign assumptions on the number of sample points @xmath13 and the probability constants @xmath119 and @xmath120 : @xmath121 we note that these assumptions are easily satisfied for any reasonable sampling density .
further , the assumptions are not crucial to the result but allow for a more compact presentation .
[ thm : mainresult1 ] define @xmath96 and let the following conditions hold ( in addition to the benign assumptions stated above ) :
* ( condition 1 ) @xmath97 ,
* ( condition 2 ) @xmath122 .
then , with probability greater than @xmath123 over the joint random selection of the sample points and random realization of the noise , @xmath124 } { \frac{r^2}{d+2 } - \frac{\mathcal{k}r^4}{2(d+2)^2(d+4 ) } - \sigma^2\left(\sqrt{d } + \sqrt{d- d}\right ) - \frac{1}{\sqrt{n}}\zeta_{\text{denom}}(\xi,\xi_{\lambda})},\ ] ] where the following definitions have been made to ease the presentation : * geometric constants @xmath125 ^ 2\right]^{\frac{1}{2 } } , & \text{(curvature ) } \\
\nu(\xi ) & = \frac{1}{2}\frac{(d+3)}{(d+2)}p_1(\xi ) , & \text{(linear -- curvature ) } \\
\eta_1 & = \sigma , & \text{(noise ) } \\ \eta_2(\xi_{\lambda } ) & = \frac{r}{\sqrt{d+2}}p_2(\xi_{\lambda } ) , & \text{(linear -- noise)}\\ \eta_3(\xi ) & = \frac{\mathcal{k}^{1/2}r^2}{(d+2)\sqrt{2(d+4)}}p_5(\xi ) , & \text{(curvature -- noise ) } \\ \eta(\xi,\xi_{\lambda } ) & = p_3(\xi,\sqrt{d(d - d ) } ) \bigg[\eta_1 + \eta_2(\xi_{\lambda } ) + \eta_3(\xi)\bigg ] , \end{aligned}\ ] ] * finite sample correction terms ( numerator ) @xmath126 * finite sample correction terms ( denominator ) @xmath127 , & \text{(linear)}\\ \zeta_4(\xi ) & = \frac{(k^{(+)})^2r^4}{4}\left(p_1(\xi ) + \frac{1}{\sqrt{n}}p_1 ^ 2(\xi)\right ) , & \text{(curvature)}\\ \zeta_5(\xi,\xi_{\lambda } ) & = 2 r \sigma \frac{d}{\sqrt{d+2 } } p_2(\xi_{\lambda } ) p_3(\xi , d ) , & \text{(linear -- noise)}\\ \zeta_6(\xi ) & = 2 \mathcal{k}^{\frac{1}{2 } } r^2 \sigma \frac{(d - d)}{(d+2)\sqrt{2(d+4)}}p_3(\xi , d - d)p_5(\xi ) , & \text{(curvature -- noise ) } \\
\zeta_7(\xi ) & = \frac{5}{2}\sigma^2\left[\sqrt{d}p_4(\xi,\sqrt{d } ) + \sqrt{d - d}p_4(\xi,\sqrt{d - d } ) \right ] , & \text{(noise)}\\ \zeta_{\text{denom}}(\xi,\xi_{\lambda } ) & = \zeta_3(\xi ) + \zeta_4(\xi ) + \zeta_5(\xi,\xi_{\lambda } ) + \zeta_6(\xi ) + \zeta_7(\xi ) , \end{aligned}\ ] ] + and * probability - dependent terms ( i.e. , terms depending on the probability constants ) + @xmath128 @xmath129 @xmath130 condition 2 is simplified from its original statement in theorem [ thm : stewart ] by noticing that @xmath100 is a symmetric matrix so that @xmath110 .
then , applying the norm bounds computed in the appendix to theorem [ thm : stewart ] and choosing the probability constants @xmath131 yields the result .
the bound will be demonstrated in section [ sec : numerical ] to accurately track the angle between the true and computed tangent spaces at all scales .
as the bound is either monotonically decreasing ( for the curvature - free case ) , monotonically increasing ( for the noise - free case ) , or decreasing at small scales and increasing at large scales ( for the general case ) , we expect that it has a unique minimizer .
the optimal scale , @xmath78 , for tangent space recovery may therefore be selected as the @xmath13 for which is minimized ( an equivalent notion of the optimal scale may be given in terms of the neighborhood radius @xmath12 ) .
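to illustrate what such a scale selection might look like in code , the sketch below evaluates a deliberately simplified stand - in for the bound : only leading tangent , curvature , and noise terms are kept , and all probability - dependent constants and exact finite - sample corrections of theorem [ thm : mainresult1 ] are dropped . it reproduces the shape of the noise - curvature trade - off , not the exact bound .

```python
import math

def schematic_bound(N, rho, d, D, sigma, K):
    """Schematic stand-in for the recovery bound: leading terms only.

    rho is the sampling density and K the (squared) curvature constant;
    constants and probability-dependent corrections are omitted, so the
    output reflects the shape of the noise-curvature trade-off rather
    than the exact bound of the main result."""
    Vd = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of unit d-ball
    r = (N / (rho * Vd)) ** (1.0 / d)                 # radius at scale N
    lam_d = r ** 2 / (d + 2)                          # tangent-space variance
    denom = (lam_d
             - K * r ** 4 / (2 * (d + 2) ** 2 * (d + 4))
             - sigma ** 2 * (math.sqrt(d) + math.sqrt(D - d)))
    if denom <= 0:                                    # condition 1 violated
        return math.inf
    num = (sigma * math.sqrt(d * (D - d) / N) * (sigma + math.sqrt(lam_d))
           + math.sqrt(K) * r ** 2 * sigma * math.sqrt((D - d) / N)
           + K * r ** 4 / (4 * math.sqrt(N)))        # finite-sample curvature
    return num / denom

def optimal_scale(rho, d, D, sigma, K, scales):
    """Pick the scale minimizing the schematic bound (no SVDs required)."""
    return min(scales, key=lambda N: schematic_bound(N, rho, d, D, sigma, K))
```

scanning a range of scales and taking the argmin mimics the selection of @xmath78 described above and requires no decompositions of the data .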
note that the constants @xmath119 and @xmath120 need to be selected to ensure that this bound holds with high probability .
for example , setting @xmath132 and @xmath133 yields probabilities of 0.81 , 0.80 , and 0.76 when @xmath134 and @xmath135 , respectively .
we also note that the probability given by this expression is more pessimistic than we expect in practice . as introduced in section [ sec : overview ] , we may interpret @xmath136 as the condition number for tangent space recovery . noting that the denominator in the main result is a lower bound on @xmath137
, we analyze the condition number via the bounds for @xmath94 , @xmath138 , and @xmath139 . using these bounds in the main result
, we see that when @xmath136 is small , we recover a tight approximation to the true tangent space .
likewise , when @xmath136 becomes large , the angle between the computed and true subspaces becomes large .
the notion of an angle loses meaning as @xmath136 tends to infinity , and we are unable to recover an approximating subspace .
condition 1 , requiring that the denominator be bounded away from zero , has an important geometric interpretation .
as noted above , the conditioning of the subspace recovery problem improves as @xmath137 becomes large .
condition 1 imposes that the spectrum corresponding to the linear subspace ( @xmath94 ) be well separated from the spectra of the noise and curvature perturbations encoded by @xmath140 . in this way
, condition 1 quantifies our requirement that there exists a scale such that the linear subspace is sufficiently decoupled from the effects of curvature and noise .
when the spectra are not well separated , the angle between the subspaces becomes ill defined . in this case , the approximating subspace contains an eigenvector corresponding to a direction orthogonal to the true tangent space .
condition 2 is a technical requirement of theorem [ thm : stewart ] .
provided that condition 1 is satisfied , we observe that a sufficient sampling density will ensure that condition 2 is met .
further , we numerically observe that the main result accurately tracks the subspace recovery error even in the case when condition 2 is violated . in such a case
, the bound may not remain as tight as desired but its behavior at all scales remains consistent with the subspace recovery error tracked in our experiments . before numerically demonstrating our main result
, we quantify the separation needed between the linear structure and the noise and curvature with a geometric uncertainty principle .
condition 1 indeed imposes a geometric requirement for tangent space recovery .
solving for the range of scales for which condition 1 is satisfied and requiring the solution be real yields the geometric uncertainty principle stated in section [ sec : overview ] .
we note that this result is derived using @xmath16 , defined above , as the full expression for @xmath137 does not allow for an algebraic solution .
the geometric uncertainty principle expresses a natural requirement for the subspace recovery problem , ensuring that the perturbation to the tangent space is not too large .
recall that , with high probability , the noise orthogonal to the tangent plane concentrates on a sphere with mean curvature @xmath25 .
we therefore expect to require that the curvature of the manifold be less than the curvature of this noise - ball .
recalling the definitions of @xmath115 and @xmath15 from equation , @xmath141 is the mean curvature in codimension @xmath37 .
the quadratic mean of the @xmath33 mean curvatures is given by @xmath142 and we denote this normalized version of curvature as @xmath143 .
the condition then requires that @xmath144 . noting that @xmath145 , the uncertainty principle indeed requires that the mean curvature of the manifold be less than that of the perturbing noise - ball . as we expected to enforce @xmath146 , the uncertainty principle is in fact more restrictive than our intuition suggests . however , as only finite - sample corrections have been neglected in @xmath16 , the @xmath147 restriction is of the correct order .
interestingly , this more restrictive requirement for tangent space recovery is only accessible through the careful perturbation analysis presented above and an estimate obtained by a more naive analysis would be too lax .
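as a rough numerical companion , one can test whether a given pair of curvature and noise level leaves any admissible scale . the sketch below is our own reading of the statement : it assumes the normalized curvature is the quadratic mean of the mean curvatures and that the noise - ball mean curvature scales like the reciprocal of the noise level times the square root of the codimension , with the precise ( more restrictive ) constant collected into a parameter c .

```python
import math

def uncertainty_feasible(K, sigma, d, D, c=1.0):
    """Rough feasibility check for the geometric uncertainty principle.

    Assumes (our reading of the text) that the normalized curvature is
    the quadratic mean of the D - d mean curvatures,
    K_bar = sqrt(K / (D - d)) / d, and that the noise-ball mean curvature
    scales like 1 / (sigma * sqrt(D - d)); c stands in for the precise,
    more restrictive constant of the exact statement."""
    K_bar = math.sqrt(K / (D - d)) / d
    return K_bar < c / (sigma * math.sqrt(D - d))
```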
in this section we present an experimental study of the tangent space perturbation results given above . in particular , we demonstrate that the bound presented in the main result ( theorem [ thm : mainresult1 ] ) accurately tracks the subspace recovery error at all scales . as this analytic result requires no decompositions of the data matrix , our analysis provides an efficient means for obtaining the optimal scale for tangent space recovery .
we first present a practical use of the main result , demonstrating its accuracy when the intrinsic dimensionality , curvature , and noise level are known .
we then experimentally test the stability of the bound when these parameters are only imprecisely available , as is the case when they must be estimated from the data .
finally , we demonstrate the accurate estimation of the noise level and local curvature .
we generate a data set sampled from a 3-dimensional manifold embedded in @xmath148 according to the local model by uniformly sampling @xmath149 points inside a ball of radius @xmath150 in the tangent plane .
curvature and the standard deviation @xmath14 of the added gaussian noise will be specified in each experiment .
we compare our bound with the true subspace recovery error .
the tangent plane at reference point @xmath7 is computed at each scale @xmath13 via pca of the @xmath13 nearest neighbors of @xmath7 .
the true subspace recovery error @xmath80 is then computed at each scale . note that computing the true error requires @xmath13 svds .
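a sketch of this experimental loop ( our own code , reusing the earlier sketches ; all parameter values are illustrative , not those used in the paper ) :

```python
import numpy as np

# assumes sample_local_model, tangent_basis_at_scale and subspace_distance
# from the earlier sketches are in scope; all values below are illustrative
d, D, R, sigma = 3, 20, 1.0, 0.05
rng = np.random.default_rng(1)
K = rng.uniform(-2.0, 2.0, size=(D - d, d))    # illustrative principal curvatures
X = sample_local_model(5000, d, D, R, K, sigma, rng)
x0 = np.zeros(D)
U_true = np.eye(D)[:, :d]                      # true tangent space in model coordinates

scales = list(range(2 * d, 2001, 25))
errors = [subspace_distance(U_true, tangent_basis_at_scale(X, x0, N, d))[0]
          for N in scales]
N_star = scales[int(np.argmin(errors))]        # empirically optimal scale
```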
a `` true bound '' is computed by applying theorem [ thm : stewart ] after measuring each perturbation norm directly from the data .
while no svds are required , this true bound utilizes information that is not practically available and represents the best possible bound that we can hope to achieve .
we will compare the mean of the true error and mean of the true bound over 10 trials ( with error bars indicating one standard deviation ) to the bound given by our main result in theorem [ thm : mainresult1 ] , holding with probability greater than 0.8 .
for the experiments in this section , the bound is computed with full knowledge of the necessary parameters .
in fact , as knowledge of @xmath4 provides an exact expression for @xmath94 ( i.e. , no additional geometric information is encoded in @xmath94 that is not already encoded in @xmath4 ) , we compute @xmath94 exactly .
further , as the principal curvatures are known , we compute a tighter bound for @xmath151 using @xmath152 in place of @xmath153 .
doing so only affects the height of the curve ; its trend as a function of scale is unchanged . in practice ,
the important information is captured by tracking the trend of the true error regardless of whether it provides an upper bound to any random fluctuation of the data . in fact , the numerical results indicate that an accurate tracking of error is possible even when condition 2 of theorem [ thm : mainresult1 ] is violated .
[ table : principal curvatures of the manifold for figures [ fig : results]b and [ fig : results]c ; the table body was lost in extraction . ]
while it is unrealistic for data to be observed in the desired coordinate system aligned with the principal directions , tracking the trajectory of the center in each dimension yields the rotation necessary to transform to this coordinate system .
further , tracking the trajectory may yield a clean estimate of the reference point of the local model in the presence of noise .
while the noise leaves this trajectory unstable at small scales , it is very stable at scales above the noise level .
using the stability of the trajectory at large scales may allow us to extrapolate back and accurately recover the trajectory at small scales , yielding an estimate of the `` denoised '' reference point .
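tracking the center trajectory is inexpensive ; a minimal sketch ( our own , with assumed names ) computes the neighborhood mean at every scale with one pass of cumulative sums . the extrapolation step itself is left out .

```python
import numpy as np

def center_trajectory(X, x0, scales):
    """Neighborhood mean of the N nearest neighbors of x0 for each scale N.

    The trajectory is noisy at small scales and stabilizes at scales above
    the noise level; extrapolating the stable part back toward small scales
    is one route to a denoised reference point."""
    order = np.argsort(np.linalg.norm(X - x0, axis=1))
    csum = np.cumsum(X[order], axis=0)         # running sums over neighbors
    return np.array([csum[N - 1] / N for N in scales])
```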
local pca of manifold - valued data has received attention in several recent works ( for example , those referenced in section [ sec : intro ] ) . in particular , the analyses of @xcite and @xcite , after suitable translation of notation and assumptions , demonstrate growth rates for the pca spectrum that match those computed in the present work .
the focus of our analysis is the perturbation of the eigenspace recovered from the local data covariance matrix .
we therefore confirm our results with those most similar from the literature .
two closely related results are those of @xcite , in which matrix perturbation theory is used to study the pca spectrum , and @xcite , where neighborhood size and sampling conditions are given to ensure an accurate tangent space estimate from noise - free manifold - valued data . in @xcite , a finite - sample pca analysis assuming a linear model is presented .
keeping @xmath13 and @xmath49 fixed , the noise level @xmath14 is considered to be a small parameter . much like the analysis of the present paper , the results are derived in the non - asymptotic setting .
however , the bound on the angle between the finite - sample and population eigenvectors is summarized in @xcite for the asymptotic regime where @xmath13 and @xmath49 become large .
the result , restated here in our notation , takes the form : @xmath154 we note that the main results of @xcite are stated for @xmath155 and that our analysis expects the opposite in general , although it is not explicitly required .
nonetheless , by setting curvature terms to zero , our results recover the reported leading behavior following the same asymptotic regime as @xcite , where terms @xmath156 are neglected and @xmath14 is treated as a small parameter . after setting all curvature terms to zero
, we assume condition 1 holds such that the denominator @xmath137 is sufficiently well conditioned and we may drop all terms other than @xmath94 . then our main result has the form : @xmath157 = \frac{\sigma}{\sqrt{\lambda_d}}\frac{\sqrt{d(d - d)}}{\sqrt{n } } + \mathcal{o}(\sigma^2).\ ] ] setting @xmath158 to match the analysis in @xcite recovers its curvature - free result .
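this curvature - free leading behavior is easy to check by simulation ; the monte carlo sketch below ( our own ) samples a noisy linear subspace and compares the measured projector distance against the predicted leading term , using the tangent variance of uniform sampling in a ball .

```python
import numpy as np

# Monte Carlo check of the curvature-free leading behavior.
rng = np.random.default_rng(2)
d, D, R, sigma, N = 3, 20, 1.0, 0.01, 2000     # illustrative values
g = rng.standard_normal((N, d))
L = g / np.linalg.norm(g, axis=1, keepdims=True)
L *= (R * rng.random(N) ** (1.0 / d))[:, None]           # uniform in d-ball
X = np.hstack([L, np.zeros((N, D - d))]) + sigma * rng.standard_normal((N, D))

Xc = X - X.mean(axis=0)
U_hat = np.linalg.svd(Xc, full_matrices=False)[2][:d].T  # top-d right singular vectors
diff = np.eye(D)[:, :d] @ np.eye(D)[:, :d].T - U_hat @ U_hat.T

lam_d = R ** 2 / (d + 2)                       # tangent variance, uniform ball
measured = np.linalg.norm(diff, 'fro')
predicted = (sigma / np.sqrt(lam_d)) * np.sqrt(d * (D - d) / N)
print(measured, predicted)                     # should be the same order
```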
next , @xcite presents an analysis of local pca differing from ours in two crucial ways .
first , the analysis of @xcite does not include high - dimensional noise perturbation and the data points are assumed to be sampled directly from the manifold .
second , the sampling density is not fixed , whereas the neighborhood size determines the number of sample points in our analysis .
in fact , a goal of the analysis in @xcite is to determine a sampling density that will yield an accurate tangent space estimate .
allowing for a variable sampling density has the effect of decoupling the condition number @xmath136 from the norm @xmath159 measuring the amount of `` lift '' in directions normal to the tangent space due to the perturbation . the analysis of @xcite proceeds by first determining the optimal neighborhood radius @xmath79 in the asymptotic limit of infinite sampling , @xmath160 .
this approach yields the requirement that the spectra associated with the tangent space and curvature be sufficiently separated .
translating to our notation , setting noise terms to zero , and assuming the asymptotic regime of @xcite such that we may neglect finite - sample correction terms , we recover condition 1 of our main result , theorem [ thm : mainresult1 ] : @xmath161 thus , theorem 1 of @xcite requires that @xmath12 be chosen such that the subspace recovery problem is well conditioned in the same sense that we require by condition 1 . substituting the expectations for each term then yields @xmath162 , implying the choice @xmath163 ( for a constant @xmath164 ) , in agreement with the analysis of @xcite .
once the proper neighborhood size has been selected , the decoupling assumed in @xcite allows a choice of sampling density large enough to ensure a small angle .
again translating to our result , once @xmath12 is selected so that the denominator @xmath137 is well conditioned , the density may be chosen such that the @xmath22 decay of the numerator @xmath159 allows for a small recovery angle .
thus , we see that in the limit of infinite sampling and absence of noise , our results are consistent with those of @xcite in the fixed density setting .
as explored in the experimental section , practical methods must be developed to recover parameters such as dimension , curvature , and noise .
such parameters are necessary for any analysis or algorithm and should be recovered directly from the data rather than estimated by _ a priori _ fixed values . in section [ sec : numerical ] , we demonstrate methods for recovery of noise level and curvature .
the accuracy and stability of such schemes remain to be tested .
there exist other statistical methods for estimating the noise level present in a data set that should be useful in this context ( see , for example , @xcite ) . in @xcite , the smallest multiscale singular values are used as an estimate for the noise level and a scale - dependent estimate of noise variance is suggested in @xcite for curve - denoising .
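in the spirit of the multiscale - singular - value estimate just cited , a minimal sketch ( our own ; it assumes the intrinsic dimension @xmath4 is known and the scale is small enough that curvature is negligible ) :

```python
import numpy as np

def estimate_sigma(X, x0, N, d):
    """Estimate the noise level from the trailing singular values of the
    centered neighborhood: at scales where curvature is negligible, the
    D - d smallest singular values are noise-dominated and each scales
    like sigma * sqrt(N). Assumes N >= D and that d is known."""
    nbrs = X[np.argsort(np.linalg.norm(X - x0, axis=1))[:N]]
    s = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
    return float(np.median(s[d:]) / np.sqrt(N))
```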
methods for estimating curvature ( e.g. , @xcite ) have been developed for application to computer vision and extensions to the high - dimensional setting should be explored .
further , if one is willing to perform many svds of large matrices , our method presented in section [ sec : numerical ] combined with the analysis of @xcite might yield the individual principal curvatures .
the experimental results presented above suggest the particular importance of accurately estimating the intrinsic dimension @xmath4 , for which there exist several algorithms .
fukunaga introduced a local pca - based approach for estimating @xmath4 in @xcite .
the recent work in @xcite presents a multiscale approach that estimates @xmath4 in a pointwise fashion : an svd is performed at each scale , and @xmath4 is determined by examining the growth rate of the multiscale singular values .
it would be interesting to investigate if this approach remains robust if only a coarse exploration of the scales is performed , as it may be possible to reduce the computational cost through an svd - update scheme .
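a crude pointwise version of such a multiscale dimension estimate might look as follows ( our own sketch ; the normalization and the slope tolerance of 0.5 are assumptions ) : tangent singular values grow roughly linearly in the neighborhood radius , curvature directions quadratically , and noise directions stay flat .

```python
import numpy as np

def estimate_dimension(X, x0, scales):
    """Count singular values of the centered neighborhood whose growth
    across scales is closest to linear in the neighborhood radius.
    Assumes every N in scales satisfies N >= ambient dimension."""
    order = np.argsort(np.linalg.norm(X - x0, axis=1))
    log_r, log_s = [], []
    for N in scales:
        nbrs = X[order[:N]]
        log_r.append(np.log(np.linalg.norm(nbrs[-1] - x0)))
        s = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
        log_s.append(np.log(s / np.sqrt(N)))   # normalize out sample size
    log_s = np.array(log_s)
    slopes = np.array([np.polyfit(log_r, log_s[:, j], 1)[0]
                       for j in range(log_s.shape[1])])
    return int(np.sum(np.abs(slopes - 1.0) < 0.5))   # near-linear growth
```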
another scale - based approach is presented in @xcite and the problem was studied from a dynamical systems perspective in @xcite . for a tractable analysis
, assumptions about sampling must be made . in this work
we have assumed uniform sampling in the tangent plane .
this is merely one choice and we have conducted initial experiments uniformly sampling the manifold rather than the tangent plane .
results suggest that for a given radius , sampling the manifold yields a smaller curvature perturbation than that from sampling the tangent plane .
while more rigorous analysis and experimentation is needed , it is clear that consideration must be given to the sampling assumptions for any practical algorithm .
the tangent plane recovered by our approach may not provide the best approximation over the entire neighborhood from which it was derived . depending on a user - defined error tolerance
, a smaller or larger sized neighborhood may be parameterized by the local chart .
if high accuracy is required , one might only parameterize a neighborhood of size @xmath165 to ensure the accuracy requirement is met . similarly ,
if an application requires only modest accuracy , one may be able to parameterize a larger neighborhood than that given by @xmath78 .
finally , we may wish to use tangent planes recovered from different neighborhoods to construct a covering of a data set .
there exist methods for aligning local charts into a global coordinate system ( see , for example , @xcite ) .
care should be taken to define neighborhoods such that a data set may be optimally covered .
this work was supported by the national science foundation [ dms-0941476 to f.g.m . and d.n.k . , dge-0801680 to d.n.k . ] and the department of energy [ de - sc0004096 to f.g.m . ] .
brand , m. ( 2003 ) charting a manifold . in _ adv . neural inf . process . syst . 15 _ , pp . 961 - 968 . mit press .
broomhead , d. & king , g. ( 1986 ) extracting qualitative dynamics from experimental data . _ physica d _ , * 20*(2 - 3 ) , 217 - 236 .
chen , g. , little , a. , maggioni , m. & rosasco , l. ( 2011 ) some recent advances in multiscale geometric analysis of point clouds . in _ wavelets and multiscale analysis : theory and applications _ , ed . by j. cohen & a. zayed . springer .
davis , c. & kahan , w. ( 1970 ) the rotation of eigenvectors by a perturbation iii . _ siam j. numer . anal . _ , * 7 * , 1 - 46 .
donoho , d. & johnstone , i. ( 1995 ) adapting to unknown smoothness via wavelet shrinkage . _ j. amer . statist . assoc . _ , * 90 * , 1200 - 1224 .
edelman , a. ( 1988 ) eigenvalues and condition numbers of random matrices . _ siam j. matrix anal . appl . _ , * 9*(4 ) , 543 - 560 .
feiszli , m. & jones , p. ( 2011 ) curve denoising by multiscale singularity detection and geometric shrinkage . _ appl . comput . harmon . anal . _ , * 31 * , 392 - 409 .
froehling , h. , crutchfield , j. , farmer , d. , packard , n. & shaw , r. ( 1981 ) on determining the dimension of chaotic flows . _ physica d _ , * 3 * , 605 - 617 .
fukunaga , k. & olsen , d. ( 1971 ) an algorithm for finding intrinsic dimensionality of data . _ ieee trans . comput . _ , * c-20*(2 ) , 176 - 183 .
giaquinta , m. & modica , g. ( 2009 ) _ mathematical analysis : an introduction to functions of several variables _ . springer .
golub , g. & van loan , c. ( 1996 ) _ matrix computations _ . jhu press .
johnstone , i. ( 2001 ) on the distribution of the largest eigenvalue in principal component analysis . _ ann . statist . _ , * 29 * , 295 - 327 .
jones , p. ( 1990 ) rectifiable sets and the traveling salesman problem . _ invent . math . _ , * 102 * , 1 - 15 .
jung , s. & marron , j. ( 2009 ) consistency in high dimension , low sample size context . _ ann . statist . _ , * 27 * , 4104 - 4130 .
kambhatla , n. & leen , t. ( 1997 ) dimension reduction by local principal component analysis . _ neural comput . _ , * 9 * , 1493 - 1516 .
krsek , p. , lukács , g. & martin , r. r. ( 1998 ) algorithms for computing curvatures from range data . in _ the mathematics of surfaces viii _ , information geometers .
laurent , b. & massart , p. ( 2000 ) adaptive estimation of a quadratic functional by model selection . _ ann . statist . _ , * 28*(5 ) , 1302 - 1338 .
lin , t. & zha , h. ( 2008 ) riemannian manifold learning . _ ieee trans . pattern anal . mach . intell . _ , * 30 * , 796 - 809 .
mitra , n. , nguyen , a. & guibas , l. ( 2004 ) estimating surface normals in noisy point cloud data . _ internat . j. comput . geom . appl . _ , * 14*(4 - 5 ) , 261 - 276 .
muirhead , r. ( 1982 ) _ aspects of multivariate statistical theory _ . wiley .
nadler , b. ( 2008 ) finite sample approximation results for principal component analysis : a matrix perturbation approach . _ ann . statist . _ , * 36 * , 2792 - 2817 .
ohtake , y. , belyaev , a. & seidel , h .- p . ( 2006 ) a composite approach to meshing scattered data . _ graph . models _ , * 68 * , 255 - 267 .
roweis , s. & saul , l. ( 2000 ) nonlinear dimensionality reduction by locally linear embedding . _ science _ , * 290 * , 2323 - 2326 .
roweis , s. , saul , l. & hinton , g. ( 2002 ) global coordination of locally linear models . in _ adv . neural inf . process . syst . 14 _ , pp . 889 - 896 . mit press .
shawe - taylor , j. & cristianini , n. ( 2003 ) estimating the moments of a random vector with applications . in _ proc . of gretsi 2003 conference _ , pp . 47 - 52 .
singer , a. & wu , h .- t . ( 2012 ) vector diffusion maps and the connection laplacian . _ comm . pure appl . math . _ , * 64 * , 1067 - 1144 .
stewart , g. & sun , j. ( 1990 ) _ matrix perturbation theory _ . academic press .
tropp , j. ( 2011 ) user - friendly tail bounds for sums of random matrices . _ found . comput . math . _ , * 12*(4 ) , 389 - 434 .
tyagi , h. , vural , e. & frossard , p. ( 2012 ) tangent space estimation for smooth embeddings of riemannian manifolds . _ arxiv preprint , available at http://arxiv.org/abs/1208.1065 _ .
vershynin , r. ( 2012 ) introduction to the non - asymptotic analysis of random matrices . in _ compressed sensing , theory and applications _ , ed . by y. eldar & g. kutyniok . cambridge .
wang , x. & marron , j. ( 2008 ) a scale - based approach to finding effective dimensionality in manifold learning . _ electron . j. statist . _ , * 2 * , 127 - 148 .
williams , d. & shah , m. ( 1992 ) a fast algorithm for active contours and curvature estimation . _ cvgip : image und . _ , * 55*(1 ) , 14 - 26 .
yang , l. ( 2008 ) alignment of overlapping locally scaled patches for multidimensional scaling and dimensionality reduction . _ ieee trans . pattern anal . mach . intell . _ , * 30 * , 438 - 450 .
zhang , t. , szlam , a. , wang , y. & lerman , g. ( 2010 ) randomized hybrid linear modeling by local best - fit flats . in _ cvpr _ , pp . 1927 - 1934 .
zhang , z. & zha , h. ( 2004 ) principal manifolds and nonlinear dimensionality reduction via tangent space alignment . _ siam j. sci . comput . _ , * 26 * , 313 - 338 .
technical calculations are presented in this appendix . in particular , the norm of each random matrix contributing to the perturbation term @xmath100 , defined earlier , is bounded with high probability . the analysis is divided between three cases : ( 1 ) norms of products of bounded random matrices ; ( 2 ) norms of products of unbounded random matrices ; and ( 3 ) norms of products of bounded and unbounded random matrices .
each case requires careful attention to derive a tight result that avoids large union bounds and ensures high probability that is independent of the ambient dimension @xmath49 .
the analysis proceeds by bounding the eigenvalues of the matrices @xmath166 , @xmath167 , and @xmath168 using results from random matrix theory and properties of the spectral norm . a detailed analysis of each of the three cases follows . before we start the proofs , one last comment is in order .
the reader will notice that we sometimes introduce benign assumptions about the number of samples @xmath13 or the dimensions @xmath4 or @xmath49 in order to provide bounds that are simpler to interpret .
these assumptions are not needed to derive any of the results ; they are merely introduced to help us simplify a complicated expression , and introduce upper bounds that hold under these fairly benign assumptions .
this should help the reader interpret the size of the different terms .
we often vectorize matrices by concatenating the columns of a matrix .
if @xmath169 , then we define @xmath170 . we denote the largest and smallest eigenvalues of a matrix @xmath71 by @xmath171 , respectively . in the main body of the paper , we use the standard notation @xmath172 to denote the sample mean of @xmath13 columns from the matrix @xmath60 . in this appendix , we introduce a second notation for the same concept , @xmath173 } = \overline{x } = \frac{1}{n } \sum_{n=1}^n x^{(n)}.\ ] ] finally , we denote by @xmath174 the expectation of the random matrix @xmath60 and by @xmath175 the probability of event @xmath176 .
we seek a bound on the maximum and minimum ( nonzero ) eigenvalue of the matrix @xmath177 as only the nonzero eigenvalues are of interest , we proceed by considering only the nonzero upper - left @xmath178 block of the matrix in , or equivalently , by ignoring the trailing zeros of each realization @xmath179 .
thus , momentarily abusing notation , we consider the matrix in to be of dimension @xmath178 .
the analysis utilizes the following theorem found in @xcite .
[ thm : tropp ] consider a finite sequence @xmath180 of independent , random , self adjoint matrices that satisfy @xmath181 compute the minimum and maximum eigenvalues of the sum of expectations , @xmath182 then @xmath183 \leq d\left[\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right]^{\mu_{\min}/\lambda_{\infty } } , \quad \text{for } \delta \in [ 0,1 ] , \text { and } \\ & \mathbb{p}\left[\lambda_{\max}\left(\sum_{k=1}^n x_k\right ) \geq ( 1+\delta)\mu_{\max } \right ] \leq d\left[\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right]^{\mu_{\max}/\lambda_{\infty } } , \quad \text{for } \delta \geq 0
. \end{aligned}\ ] ] we apply this result to @xmath184 clearly @xmath185 is a symmetric positive - definite matrix and we have @xmath186 .
next , @xmath187 and we set @xmath188 .
simple computations yield @xmath189\right ) = \frac{r^2}{d+2}\left[1-\frac{1}{n}\right]^2 ,
\quad \text{and}\quad \lambda_{\min}\left(\sum_{k=1}^n\operatorname{\mathbb{e}}[x_k]\right ) = \frac{r^2}{d+2}\left[1-\frac{1}{n}\right]^2,\ ] ] and we set @xmath190 ^ 2.\ ] ] by theorem [ thm : tropp ] and using standard manipulations , we have the following bound for the smallest eigenvalue , @xmath94 in our notation , @xmath191 ^ 2\left[1 - \xi_{\lambda_d } \frac{1}{\sqrt{n}}\frac{\sqrt{8(d+2)}}{(1-\frac{1}{n } ) } \right]\ ] ] with probability greater than @xmath192 .
similarly , the following result holds for the largest eigenvalue , @xmath193 in our notation : @xmath194\ ] ] with probability greater than @xmath195 , as soon as @xmath196 .
we define the last upper bound as @xmath197 , \label{eig - linear - bound}\ ] ] and we can use this bound to control the size of all the eigenvalues of the matrix @xmath198 ,
@xmath199 \ge 1-de^{-\xi^2}. \label{eig - linear - prob}\ ] ] now that we have computed the necessary bounds for all nonzero linear eigenvalues , we return to our standard notation for the remainder of the analysis : each @xmath179 is of length @xmath49 with @xmath200 for @xmath201 , and @xmath202 is a @xmath203 matrix . to bound the largest eigenvalue , @xmath204 , of @xmath205 , we note that the spectral norm is bounded by the frobenius norm and we use the bound on the frobenius norm derived in section [ sec : pure - curvature ] .
we can use this bound to control the size of all the eigenvalues of the matrix @xmath206 , @xmath207 \ge 1 - 2e^{-\xi^2}. \label{eig - curvature - prob}\ ] ] where @xmath208 ^ 2 } + \frac{(k^{(+)})^2 r^4}{4 \sqrt{n}}\left[(2+\xi\sqrt{2 } ) + \frac{(2+\xi\sqrt{2})^2}{\sqrt{n } } \right ] .
\label{eq : eigcc^t - bound}\ ] ] the proof of the bound on the frobenius norm is delayed until section [ sec : pure - curvature ] .
+ a different ( possibly tighter ) bound may be derived using theorem [ thm : tropp ] .
however , such a bound would hold with a probability that becomes small when the ambient dimension @xmath49 is large .
we therefore proceed with the bound above , noting that we sacrifice no additional probability by using it here since it is required for the analysis in section [ sec : pure - curvature ] .
we may control the eigenvalues of @xmath209 using standard results from random matrix theory . in particular ,
let @xmath210 and @xmath211 denote the smallest and largest singular value of matrix @xmath58 , respectively .
the following result ( corollary 5.35 of @xcite ) gives a tight control on the size of @xmath210 and @xmath211 when @xmath58 has gaussian entries .
[ thm : gaussiansvs ] let @xmath212 be a @xmath203 matrix whose entries are independent standard normal random variables . then for every @xmath213 , with probability at least @xmath214 one has @xmath215 define @xmath216 and note that the entries of @xmath217 are independent standard normal random variables . let us partition the gaussian vector @xmath46 into the first @xmath4 coordinates , @xmath218 , and last @xmath219 coordinates , @xmath220 , @xmath221 and observe that the matrix @xmath222 only depends on the realizations of @xmath218 .
similarly , the matrix @xmath223 only depends on the realizations of @xmath220 . by theorem [ thm : gaussiansvs ]
, we have @xmath224 with probability at least @xmath225 over the random realization of @xmath218 , as soon as @xmath226 , a condition easily satisfied for any reasonable sampling density .
similarly , @xmath227 with probability at least @xmath228 over the random realization of @xmath220 , as soon as @xmath229 .
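the singular value bounds of theorem [ thm : gaussiansvs ] are easy to verify by simulation ; a small monte carlo check ( our own sketch ) :

```python
import numpy as np

# Monte Carlo check that all singular values of a d x N standard Gaussian
# matrix lie in [sqrt(N) - sqrt(d) - t, sqrt(N) + sqrt(d) + t] with
# probability at least 1 - 2 * exp(-t**2 / 2).
rng = np.random.default_rng(0)
d, N, t, trials = 3, 1000, 3.0, 200
hits = 0
for _ in range(trials):
    s = np.linalg.svd(rng.standard_normal((d, N)), compute_uv=False)
    hits += (np.sqrt(N) - np.sqrt(d) - t <= s.min()
             and s.max() <= np.sqrt(N) + np.sqrt(d) + t)
print(hits / trials)    # should be essentially 1
```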
begin by recalling the notation used for the curvature constants , @xmath231 the constant @xmath115 quantifies the curvature in codimension @xmath37 , for @xmath232 @xmath233 .
the overall compounded curvature of the local model is quantified by @xmath15 and is a natural result of our use of the frobenius norm .
we note that @xmath117 .
we also recall the positive constants @xmath234 our strategy for bounding the matrix norm @xmath235 begins with the observation that @xmath205 is a sample mean of @xmath13 covariance matrices of the vectors @xmath236 , @xmath237 .
that is , @xmath238})(c-{\widehat{\operatorname{\mathbb{e}}}[c]})^t]}.\ ] ] we therefore expect that @xmath205 converges toward the centered covariance matrix of @xmath52 .
we will use the following result of shawe - taylor and cristianini @xcite to bound , with high probability , the norm of the difference between this sample mean and its expectation .
[ thm : shawe ] _ ( shawe - taylor & cristianini , @xcite)_.given @xmath13 realizations of a random matrix @xmath239 distributed with probability distribution @xmath240 , we have @xmath241 - { \widehat{\operatorname{\mathbb{e}}}[y]}\right\|_f \leq \frac{r}{\sqrt{n}}\left(2+\xi\sqrt{2}\right ) \right\ } \ge 1-e^{-\xi^2}. \label{eq : shawe - taylor}\ ] ] the constant @xmath242 , where @xmath243 is the support of distribution @xmath240 .
we note that the original formulation of the result involves only random vectors , but since the frobenius norm of a matrix is merely the euclidean norm of its vectorized version , we formulate the theorem in terms of matrices .
we also note that the choice of @xmath244 need not be unique .
our analysis will proceed by using upper bounds for @xmath245 which may not be suprema .
let @xmath246 using theorem [ thm : shawe ] and modifying slightly the proof of corollary 6 in @xcite , which uses standard inequalities , we arrive at @xmath247)(c-\operatorname{\mathbb{e}}[c])^tu_2]\right\|_f - \left\|{\widehat{\operatorname{\mathbb{e}}}[u_2^t(c-{\widehat{\operatorname{\mathbb{e}}}[c]})(c-{\widehat{\operatorname{\mathbb{e}}}[c]})^tu_2]}\right\|_f \bigg| & \nonumber \\
\leq \left\|\operatorname{\mathbb{e}}[u_2^tcc^tu_2 ] - { \widehat{\operatorname{\mathbb{e}}}[u_2^tcc^tu_2]}\right\|_f \negthickspace + \left\|\operatorname{\mathbb{e}}[u_2^tc ] - { \widehat{\operatorname{\mathbb{e}}}[u_2^tc]}\right\|_f^2 \leq \frac{r_{c}^2}{\sqrt{n}}\left(2+\xi_{c}\sqrt{2}\right ) & + \frac{r_{c}^2}{n}\left(2+\xi_{c}\sqrt{2}\right)^2\end{aligned}\ ] ] with probability greater than @xmath248 over the random selection of the sample points . to complete the bound
we must compute @xmath249 and @xmath250)(c-\operatorname{\mathbb{e}}[c])^tu_2]\right\|_f$ ] .
a simple norm calculation shows @xmath251 and we set @xmath252 .
next , the expectation takes the form @xmath253)(c-\operatorname{\mathbb{e}}[c])^tu_2]\big\|_f ~=~ \big\|\operatorname{\mathbb{e}}[u_2^tcc^tu_2 ] - \operatorname{\mathbb{e}}[u_2^tc]\operatorname{\mathbb{e}}[c^tu_2 ] \big\|_f.\ ] ] noting that @xmath254 for @xmath255 , we have @xmath256 = \frac{\left[3k_{nn}^{ij } + k_{mn}^{ij}\right]r^4}{4(d+2)(d+4)},\quad\text{and}\quad \operatorname{\mathbb{e}}[c_i]\operatorname{\mathbb{e}}[c_j ] = \frac{\left[k_{nn}^{ij } + k_{mn}^{ij}\right]r^4}{4(d+2)^2}.\ ] ] computing the norm , @xmath257)(c-\operatorname{\mathbb{e}}[c])^tu_2]\big\|_f = \frac{r^4}{2(d+2)^2(d+4 ) } \sqrt{\sum_{i , j = d+1}^d\left[(d+1)k_{nn}^{ij}-k_{mn}^{ij}\right]^2}.\end{aligned}\ ] ] finally , putting it all together , we conclude that @xmath258 ^ 2 } \nonumber \\ & + \frac{1}{\sqrt{n}}\frac{(k^{(+)})^2 r^4}{4}\left[\left(2+\xi_{c}\sqrt{2}\right ) + \frac{1}{\sqrt{n}}\left(2+\xi_{c}\sqrt{2}\right)^2 \right ] \end{aligned}\ ] ] with probability greater than @xmath248 over the random selection of the sample points . our approach for bounding the matrix norm @xmath260 mirrors that of section [ sec : pure - curvature ] .
here , we use that @xmath261 = 0 $ ] for @xmath262 and proceed as follows .
we have @xmath263 reasoning as in the previous section , we have @xmath247)(\ell-\operatorname{\mathbb{e}}[\ell])^tu_1]\right\|_f - \left\|{\widehat{\operatorname{\mathbb{e}}}[u_2^t(c-{\widehat{\operatorname{\mathbb{e}}}[c]})(\ell-{\widehat{\operatorname{\mathbb{e}}}[\ell]})^tu_1]}\right\|_f \bigg| & \nonumber \\ \leq \left\|\operatorname{\mathbb{e}}[u_2^tc\ell^tu_1 ] - { \widehat{\operatorname{\mathbb{e}}}[u_2^tc\ell^tu_1]}\right\|_f + \left\|{\widehat{\operatorname{\mathbb{e}}}[\ell^tu_1 ] } - \operatorname{\mathbb{e}}[\ell^tu_1]\right\|_f \bigg(\left\|{\widehat{\operatorname{\mathbb{e}}}[u_2^tc ] } - \operatorname{\mathbb{e}}[u_2^tc]\right\|_f + & \left\|\operatorname{\mathbb{e}}[u_2^tc]\right\|_f\bigg ) \nonumber \\ \leq \frac{r_{c}r_{\ell}}{\sqrt{n}}\left(2+\xi_{c\ell}\sqrt{2}\right ) +
\frac{r_{\ell}}{\sqrt{n}}\left(2+\xi_{\ell}\sqrt{2}\right ) \bigg[\frac{r_{c}}{\sqrt{n}}\left(2+\xi_{c}\sqrt{2}\right ) + \left\|\operatorname{\mathbb{e}}[u_2^tc]\right\|_f \bigg ] & \end{aligned}\ ] ] with probability greater than @xmath264 over the random selection of the sample points .
finally , we set @xmath265 and conclude @xmath266\end{aligned}\ ] ] with probability greater than @xmath267 over the random selection of the sample points .
we seek bounds for the matrix norms of the form @xmath269 because @xmath58 is composed of @xmath13 columns of independent realizations of a @xmath49-dimensional gaussian vector , the matrix @xmath212 defined by @xmath270 is wishart @xmath271 , where @xmath272 . as a result
, we can quickly compute bounds on the terms ( [ noise - noise ] ) since they can be expressed as the norm of blocks of @xmath212 .
indeed , let us partition @xmath212 as follows @xmath273 where @xmath274 is @xmath178 , @xmath275 is @xmath276 .
we now observe that @xmath277 is not equal to @xmath278 , but both matrices have the same frobenius norm .
precisely , the two matrices differ only by a left and a right rotation , as explained in the next few lines .
since only the first @xmath4 entries of each column in @xmath90 are nonzero , we can define two matrices @xmath279 and @xmath280 that extract the first @xmath4 entries and apply the rotation associated with @xmath90 , respectively , as follows @xmath281 we define similar matrices @xmath282 and @xmath283 such that @xmath284 . we conclude that @xmath285 in summary , we can control the size of the norms ( [ noise - noise ] ) by controlling the norm of the sub - matrices of a wishart matrix .
we first estimate the size of @xmath286 and @xmath287 .
this is a straightforward affair , since we can apply theorem [ thm : gaussiansvs ] with @xmath288 and @xmath289 , respectively , to get the spectral norm of @xmath274 and @xmath275 .
we then apply a standard inequality between the spectral and the frobenius norm of a matrix @xmath71 , @xmath290 this bound is usually quite loose and equality is achieved only for the case where all singular values of matrix @xmath212 are equal .
it turns out that this special case holds in expectation for the matrices in the analysis to follow , and thus provides a tight estimate of the frobenius norm . combining these estimates , we have the following bound @xmath291\ ] with probability greater than @xmath225 over the random realization of the noise . similarly , we also have @xmath292\ ] with probability greater than @xmath228 over the random realization of the noise .
it remains to bound @xmath293 .
here we proceed by conditioning on the realization of the last @xmath219 coordinates of the noise vectors in the matrix @xmath58 ; in other words , we freeze @xmath294 . rather than working with gaussian matrices , we prefer to vectorize the matrix @xmath295 and define @xmath296 note that here we unroll the matrix @xmath295 row by row to build @xmath297 . because the frobenius norm of @xmath295 is the euclidean norm of @xmath297 , we need to find a bound on @xmath298 .
conditioning on the realization of @xmath294 , we know ( theorem 3.2.10 of @xcite ) that the distribution of @xmath297 is a multivariate gaussian variable @xmath299 , where @xmath300 is the zero vector of dimension @xmath301 and @xmath302 is the @xmath303 block diagonal matrix containing @xmath4 copies of @xmath304 @xmath305 let @xmath306 be a generalized inverse of @xmath302 ( such that @xmath307 ) , then ( see e.g. theorem 1.4.4 of @xcite ) @xmath308 now , because of theorem [ thm : gaussiansvs ] , @xmath275 has full rank , @xmath33 , with probability @xmath309 , and therefore @xmath302 has full rank , @xmath301 , with the same probability . in the following ,
we derive an upper bound on the size of @xmath298 when @xmath275 has full rank . a similar but tighter
bound can be derived when @xmath302 is rank deficient ; we only need to replace @xmath33 by the rank of @xmath275 in the bound that follows .
because the bound derived when @xmath275 is full rank will hold when @xmath275 is rank deficient ( an event which happens with very small probability anyway ) , we only worry about this case in the following . in this case , @xmath310 and @xmath311 finally , using a corollary of laurent and massart ( immediately following lemma 1 of @xcite ) , we get that @xmath312 with probability greater than @xmath313 . in the following , we assume that @xmath314 , which happens as soon as @xmath4 or @xmath49 have a moderate size . under this mild assumption
we have @xmath315 in order to compare @xmath316 to @xmath317 , we compute the eigendecomposition of @xmath302 , @xmath318 where @xmath319 is a unitary matrix and @xmath320 contains the eigenvalues of @xmath304 , repeated @xmath4 times .
letting @xmath321 be the largest eigenvalue of @xmath322 , we get the following upper bound , @xmath323 we conclude that , conditioned on a realization of the last @xmath219 entries of @xmath58 , we have @xmath324 | e_2\right\ } \ge 1-e^{-\xi_{e_3}^2}. \label{conditioned - wishart}\ ] ] to derive a bound on @xmath298 that holds with high probability , we consider the event @xmath325 \right\}.\ ] ] as we will see in the following , the event @xmath326 happens with high probability .
this event depends on the random realization of the top @xmath4 coordinates , @xmath218 , of the gaussian vector @xmath46 ( see ) .
let us define a second likely event , which depends only on @xmath220 ( the last @xmath219 coordinates of @xmath46 ) , @xmath327 theorem [ thm : gaussiansvs ] tells us that the event @xmath328 is very likely , and @xmath329 .
we now show that the probability of @xmath330 is also very small , @xmath331 in order to bound the first term , we condition on @xmath220 , @xmath332 now the two conditions , @xmath333 \\ \frac{1}{\sqrt{n } } \left(1 + \frac{\sqrt{d - d } + \varepsilon}{\sqrt{n } } \right ) \ge \sqrt{\lambda_{\max}\left(\frac{1}{n}a_{22}\right ) } \end{cases}\ ] ] imply that @xmath334 , \ ] ] and thus @xmath335 \lvert e_2 \right).\ ] ] because of ( [ conditioned - wishart ] ) the probability on the right - hand side is less than @xmath336 , which does not depend on @xmath220 .
we conclude that @xmath337 finally , since @xmath338 we have @xmath339\ ] ] with probability greater than @xmath340 over the realization of the noise .
our goal is to bound the matrix norm @xmath342 , with high probability , for @xmath343 .
we detail the analysis for the case where @xmath344 and note that the analysis for @xmath345 is identical up to the difference in dimension . using the decomposition of the matrix @xmath346 defined in the previous section , we have @xmath347 before proceeding with a detailed analysis of this term , let us derive a bound , which will prove to be very precise , using a back - of - the - envelope analysis .
the entry @xmath348 in the matrix @xmath349 is given by @xmath350 and it measures the average correlation between coordinate @xmath351 of the ( centered ) noise term and coordinate @xmath352 of the linear tangent term .
clearly , this empirical correlation has zero mean , and an upper bound on its variance is given by @xmath353 where the top eigenvalue @xmath193 measures the largest variance of the random variable @xmath51 , measured along the first column of @xmath90 . since the matrix @xmath354 is @xmath178 , we expect @xmath355 we now proceed with the rigorous analysis .
the singular value decomposition of @xmath356 is given by @xmath357 where @xmath358 is the @xmath178 matrix of the singular values , and @xmath359 is a matrix composed of @xmath4 orthonormal column vectors of size @xmath13 . injecting the svd of @xmath360 we have @xmath361
define @xmath362 each row of @xmath363 is formed by the projections of the corresponding row of @xmath364 onto the @xmath4-dimensional subspace of @xmath365 formed by the columns of @xmath359 . as such , the projected row is a @xmath4-dimensional gaussian vector , the norm of which scales like @xmath366 with high probability .
the only technical difficulty involves the fact that the columns of @xmath359 change with the different realizations of @xmath56 .
we need to check that this random rotation of the vectors in @xmath359 does not affect the size of the norm of @xmath363 .
proceeding in two steps , we first freeze a realization of @xmath56 , and compute a bound on @xmath367 that does not depend of @xmath56 .
we then remove the conditioning on @xmath56 , and compute the probability that @xmath368 be very close to @xmath4 . instead of working with @xmath363
, we define the @xmath369-dimensional vector @xmath370 consider the @xmath371-dimensional gaussian vector @xmath372 in the next few lines , we construct an orthogonal projector @xmath373 such that @xmath374 . as a result
, we will have that @xmath375 , and using standard results on the concentration of the gaussian measure , we will get an estimate of @xmath376 . first , consider the following @xmath377 matrix @xmath378 formed by stacking @xmath4 copies of @xmath379 in a block diagonal fashion with no overlap ( note that @xmath379 is not a square matrix ) .
we observe that because no overlap exists between the blocks , the rows of @xmath380 are orthonormal and @xmath380 is an orthogonal projector from @xmath381 to @xmath382 . now , we consider the @xmath383 permutation matrix @xmath384 constructed as follows .
we first construct the @xmath385 matrix @xmath386 by interleaving blocks of zeros of size @xmath387 between the columns vectors of the @xmath178 identity matrix , @xmath388 now consider the matrix @xmath389 obtained by performing a circular shift of the columns @xmath386 to the right by one index , @xmath390 we can iterate this process @xmath391 times and construct @xmath13 such matrices , @xmath392 .
finally , we stack these @xmath13 matrices to construct the @xmath393 permutation matrix @xmath394 by construction , @xmath384 only contains a single nonzero entry , equal to one , in every row and every column , and therefore is a permutation matrix .
finally , the matrix @xmath384 allows us to move the action of @xmath359 from the right of @xmath58 to the left , and we have @xmath395 putting everything together , we conclude that the matrix defined by @xmath396 is an orthogonal projector , and therefore @xmath375 . using again the previous bound ( [ norm - chi ] ) on the norm of a gaussian vector
, we have @xmath397 to conclude the proof , we remove the conditioning on @xmath56 , and using ( [ z1-conditioned ] ) we have @xmath398 since @xmath399 , we have @xmath400 finally , combining ( [ eig - linear - bound ] ) , ( [ eig - linear - prob ] ) , ( [ u1q1 ] ) , ( [ le - ev ] ) , and ( [ norm - z1 ] ) we conclude that @xmath401 which implies @xmath402 \left(d + \frac{6}{5}\varepsilon\right)\right ) \nonumber \\ & \ge ( 1 - e^{-\varepsilon^2/2 } ) ( 1 - d e^{-\xi^2}).\end{aligned}\ ] ] a similar bound holds for @xmath403 .
indeed , we define @xmath404 again , we can construct an orthogonal projector @xmath405 and a permutation @xmath406 with sizes @xmath407 and @xmath408 , respectively , so that @xmath409 by combining ( [ orthoproj1 ] ) and ( [ orthoproj2 ] ) , we can control the concatenated vector @xmath410 by estimating the norm of @xmath411 . we conclude that @xmath412 \begin{bmatrix } \\ d & + & \frac{6}{5 } \xi_{e\ell}\\ \\ \sqrt{d(d - d ) } & + & \frac{6}{5 } \xi_{e\ell } \\ \\ \end{bmatrix } \label{le - all}\ ] ] with probability greater than @xmath413 over the joint random selection of the sample points and random realization of the noise . the analysis to bound the matrix norm @xmath415 for @xmath343 proceeds in an identical manner to that for the bound on @xmath416 .
we therefore give only a brief outline here . mimicking the reasoning that leads to ( [ le - lambda1 ] )
, we get @xmath417 where @xmath418 is the bound on all the eigenvalues of @xmath419 defined in ( [ eq : eigcc^t - bound ] ) .
this leads to a bound similar to ( [ le - all ] ) for the tangential and curvature components of the noise , @xmath420 with probability greater than @xmath421 over the joint random selection of the sample points and random realization of the noise . | constructing an efficient parameterization of a large , noisy data set of points lying close to a smooth manifold in high dimension remains a fundamental problem .
one approach consists in recovering a local parameterization using the local tangent plane .
principal component analysis ( pca ) is often the tool of choice , as it returns an optimal basis in the case of noise - free samples from a linear subspace . to process noisy data samples from a nonlinear manifold
, pca must be applied locally , at a scale small enough such that the manifold is approximately linear , but at a scale large enough such that structure may be discerned from noise . using eigenspace perturbation theory and non - asymptotic random matrix theory , we study the stability of the subspace estimated by pca as a function of scale , and bound ( with high probability ) the angle it forms with the true tangent space . by
adaptively selecting the scale that minimizes this bound , our analysis reveals an appropriate scale for local tangent plane recovery .
we also introduce a geometric uncertainty principle quantifying the limits of noise - curvature perturbation for stable recovery .
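to make the recipe concrete , here is a minimal sketch of single - scale local tangent plane recovery by pca ( numpy ; the function name , the radius - based neighborhood and the inputs are our own assumptions -- the paper's adaptive scale selection minimizes a bound that is not reproduced here ) :

import numpy as np

def local_tangent_basis(X: np.ndarray, x: np.ndarray, r: float, d: int) -> np.ndarray:
    # neighbors of x within radius r (the "scale"); r is the free parameter
    # that the paper's perturbation bound would be minimized over
    nbrs = X[np.linalg.norm(X - x, axis=1) <= r]
    centered = nbrs - nbrs.mean(axis=0)
    # the top-d right singular vectors span the PCA estimate of the tangent plane
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d]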
manifold - valued data , tangent space , principal component analysis , subspace perturbation , local linear models , curvature , noise .
2000 math subject classification : 62h25 , 15a42 , 60b20 |
low back pain ranks second among patient complaints in the united states , following upper respiratory tract complaints .
herniated discs ( hd ) account for 4 % of the total cases of mechanical low back pain , and occur in approximately 2.8 million patients annually .
patients with lumbar hd experience acute onset unilateral or bilateral lower extremity pain and numbness associated with the low back pain .
however , 70 % of lumbar hd patients recover from sciatica within 6 weeks of its onset .
thus , considering the natural history of hd , the overall patient prognosis is good .
however , an estimated 10 % of patients will experience continued pain and neurological deficits , and surgical intervention should be considered for these patients .
a systematic review comparing surgical intervention and conservative management indicated that surgical intervention enables faster pain relief , compared to prolonged conservative treatment , during short - term follow - up , although no marked differences are noted during long - term follow - up [ 4 , 5 ] .
lumbar hd patients are primarily between 20 and 40 years of age , employed , and play an active role in society . to reduce the quality - of - life issues caused by surgical intervention , including microdiscectomy , more effective and less invasive treatments are required .
several previous studies have described the potential gene markers for lumbar disc diseases , including collagen 9a2 [ 6 , 7 ] , vitamin d receptor , matrix metalloproteinase ( mmp)-3 , cartilage intermediate layer protein , collagen 11a1 , thrombospondin ( thbs2 ) , sickle tail ( skt ) , mmp-9 , asporin ( aspn ) , and carbohydrate sulfotransferase .
systematic reviews demonstrated that there is moderate evidence of correlation of aspn , colxia1 , skt , thbs2 , and mmp-9 with hd .
however , further studies are needed to identify the gene that is strongly correlated with the disease .
assessment of the upregulation and downregulation patterns of specific candidate genes in animal models may facilitate the identification of their precise roles in disc degeneration .
regenerative medicine techniques for the treatment of disc degeneration have recently been developed for clinical use .
co - culture of autologous mesenchymal stem cells with the patient 's nucleus pulposus cells or annulus fibrosus cells may be a good alternative for regenerating the degenerated disc matrix .
in addition , the administration of autologous platelet - rich plasma may promote a reparative effect on degenerative disc tissues .
these methods suggest the potential for regeneration of degenerated disc tissues in the near future .
the resorption process of hd was demonstrated using sequential magnetic resonance imaging ( mri ) , and this resorption process may be the reason for the relatively good prognosis in cases of hd ( fig . 1 ) .
vroomen demonstrated that 70 % of patients with hd showed disappearance of sciatica within 6 weeks of its onset .
in addition , the non - contained classification types of hd , such as transligamentous extrusion and sequestration , as well as the enhanced contrast noted around hd , indicated a high tendency for resorption , thus suggesting that vascularization around the hd would be an important factor for hd resorption .
this hd resorption phenomenon was demonstrated in the lumbar , thoracic , and cervical regions of the spine [ 22 , 23 ] .
fig . 1 : sequential magnetic resonance imaging of a 66 - year - old man demonstrating resorption of the herniated disc .
a recent study indicated that it was impossible to assess good or unfavorable outcomes using mri at the 1-year follow - up for patients who had been treated for lumbar hd .
however , with recent developments in mri , the observation of tract fibers with diffusion tensor tract images , as well as the identification of symptomatic nerve roots ( due to spinal disorders ) with diffusion - weighted imaging , is now possible . through these new technological advancements in mri ,
the identification of symptomatic nerve tissues with hd will be possible in the near future , thus facilitating more accurate investigations for hd patients .
a study on hd due to disc rupture or cartilaginous tumor was first reported in 1934 . the study concluded that ruptures of the disc were more common than tumors .
the authors recommended that the primary mode of treatment should be surgery . in 1939 , love described a surgical technique that involved the identification of an hd mass through myelography and subsequent removal of the hd via partial laminectomy .
a study on the long - term outcomes of discectomy , including a follow - up period of more than 10 years , showed relatively favorable results with an average improvement rate of 73.5 % .
a prospective randomized multicenter study was performed on 1,244 cases at 13 medical institutes in the united states to compare whether surgical or conservative treatment showed favorable outcomes .
when the short form 36 and oswestry disability index ( odi ) assessment was performed at 3 months , 1 year , and 2 years after intervention , surgical treatment was found to be more effective .
however , the findings of that study should be cautiously interpreted , as the crossover rates between the surgery and conservative groups were 40 and 45 % , respectively .
the first case of the use of the microendoscopic discectomy system in japan involved a patient with lumbar hd in 1998 .
since then , the use of minimally invasive spinal endoscopic surgery has increased rapidly , according to an annual report by the japanese orthopaedic association .
a prospective study was performed on 120 consecutive patients with lumbar hd treated by the lumbar med system , with a 5-year follow - up period ; the authors indicated that this less invasive and efficient approach using a small skin incision ensured minimal tissue damage and a shorter length of hospital stay , as well as excellent clinical outcomes ( odi and lumbar visual analogue scale scoring ) , as compared to conventional discectomy .
cases of intraoperative dural tears , surgery at the wrong disc level , and contralateral symptoms owing to the lack of complete decompression of the nerve root were mainly reported for operations performed by inexperienced surgeons , although the frequency decreased with increasing experience of the operator .
however , cases of postoperative epidural hematomas , nerve root anomalies , and mechanical failures of surgical instruments were noted even in operations performed by experienced surgeons .
the classical definition of hd is a protrusion or extrusion of a degenerated disc , which results in the presence of a low amount of disc matrix components such as proteoglycan and type ii collagen in the spinal canal or neural foramen , where there is abundant vascular supply .
surgically removed tissues showed granulation tissue , along with marked infiltration of macrophages and some lymphocytes , concomitant with neovascularization .
we developed a co - culture model of disc chondrocytes and peritoneum macrophages derived from mmp-3- or mmp-7-deficient mice to reproduce the acute phase of hd and investigate the role of mmps in hd resorption .
the mmp-3 derived from disc chondrocytes plays a crucial role in the generation of a macrophage chemoattractant , which results in the subsequent infiltration of active macrophages into disc tissues .
in addition , the mmp-7 released by macrophages is essential for the release of soluble tnf-α from these macrophages .
thus , extensive communication between disc chondrocytes and extrinsic infiltrated macrophages is important for hd resorption .
vascular endothelial growth factor ( vegf ) , a potent angiogenic factor , was found to be strongly expressed in surgical samples of lumbar hd .
we previously reported that the upregulation of vegf under co - culture conditions strongly induced neovascularization .
tnf-α induces the release of vegf from disc cells through nf - κb signaling , which results in cd31 expression on endothelial cells and formation of an anastomosing network .
interestingly , the degree of angiogenic activity was found to be closely related to aging .
monocyte chemotactic protein ( mcp)-1 is a cc chemokine that plays an important role in the activation and recruitment of macrophages .
mcp-1 was found to be expressed in both infiltrated macrophages and disc cells from surgical hd samples .
moreover , tnf-α acts as the initiator of inflammation , following contact between macrophages and disc chondrocytes .
tnf-α induces the release of thymic stromal lymphopoietin ( tslp ) from disc cells through the nf - κb pathway .
the disc cells then express the tslp receptor and produce mcp-1 through the phosphatidyl - inositol 3-kinase / akt pathway .
interestingly , mcp-1 expression in murine intervertebral discs showed age - related decreases , whereas its response to inflammation showed age - related changes . tumor necrosis factor - like weak inducer of apoptosis ( tweak ) is a member of the tnf superfamily of cytokines .
tweak and its receptor , fn14 , were expressed in disc tissues . tweak induced disc cells to generate mmp-3 via c - jun n - terminal kinase , resulting in disc matrix degradation ( fig . 2 ) [ 40 , 41 ] .
fig . 2 : schematic model demonstrating the mechanism of herniated disc resorption . activated macrophages generate tumor necrosis factor ( tnf ) -α . matrix metalloproteinase ( mmp ) -7 derived from macrophages releases soluble tnf-α from these macrophages , which induces disc cells to generate monocyte chemotactic protein ( mcp ) -1 and mmp-3 . vascular endothelial growth factor induces neovascularization .
patients with lumbar hd show acute onset of low back and/or lower extremity pain , which results in absenteeism from daily work .
thus , a less invasive treatment compared to microdiscectomy with microscopy or endoscopy is required .
chemonucleolysis is a treatment that involves the administration of enzymes into the hd , and has been proposed as an alternative and less invasive approach to avoid surgery .
chemonucleolysis causes the degradation of aggrecan and/or collagens , which results in the decrease of the hd matrix and water content , as well as a reduction in the pressure exerted by the hd on nerve tissues .
smith previously reported on the administration of chymopapain , derived from carica papaya , into lumbar hd . although its therapeutic effects have been well documented , chymopapain acts on a wide range of substrates , such as aggrecan and collagens , which can lead to certain adverse side effects , including anaphylactic shock , subarachnoid hemorrhage , transverse myelitis , and discitis .
purified bacterial collagenase , derived from clostridium histolyticum , has also been used for chemonucleolysis . a prospective randomized study using chymopapain and
collagenase showed good and excellent outcomes 5 years after treatment in 72 % of the chymopapain group and in 52 % of the collagenase group .
in addition , a prospective randomized trial of 100 consecutive lumbar hd patients was performed to determine whether chemonucleolysis with chymopapain or standard discectomy showed better or improved outcomes .
it was found that there were no differences between the two treatments at 1 , 10 - 13 , and 24 - 27 years after the treatment .
thus , chymopapain , which had been recognized as a favorable chemonucleolysis reagent , was subsequently withdrawn as a treatment option owing to the associated complications .
we demonstrated that mmp-7 , which is strongly expressed in human hd material , plays a crucial role in the hd resorption process .
therefore , we developed recombinant human ( rh ) mmp-7 , which may be an ideal candidate as a chemonucleolysis drug .
rhmmp-7 degraded human surgical samples of hd in a concentration - dependent manner . moreover , this effect is not significantly correlated with patient age , hd degeneration grade , and interval between the onset of symptoms and surgery .
the aggrecan cleavage rates of rhmmp-7 exhibited a 1,000-fold increase when compared to that of type 1 or type 2 collagens .
the intradiscal administration of rhmmp-7 was found to decrease the proteoglycan and water content in a canine in vivo model .
moreover , epidural injections of rhmmp-7 did not show any adverse effects at either the injection site or the nerve tissues .
we are currently performing clinical trials using rhmmp-7 on lumbar hd patients in the united states and are carefully monitoring the patients ' conditions . | lumbar herniated discs commonly occur in patients 20 - 40 years of age , and result in acute symptoms of shooting and intractable pain in the low back and/or lower extremities .
however , the prognosis of these patients is considered to be very good . moreover , 70 % of these patients have been reported to be free from sciatica at approximately 6 months after the first onset . magnetic resonance imaging ( mri )
studies have described the spontaneous resorption process of herniated discs , which is a major cause of the reduction of symptoms in patients .
new advancements in mri have recently been developed that have facilitated the examination of nerve tract fibers and identification of symptomatic nerve tissue .
furthermore , the mechanism underlying the resorption process of a herniated disc has been determined .
inflammatory cytokines such as tumor necrosis factor ( tnf ) -α , angiogenic factors such as vascular endothelial growth factor , and enzymes such as matrix metalloproteinases are intricately related to each other . in our previous studies , matrix metalloproteinase-7 ( mmp-7 ) has been shown to play a crucial role in the initiation of herniated disc resorption .
therefore , we developed recombinant human mmp-7 for intradiscal therapy through an industry - university joint research program .
we have already performed in vitro and in vivo experiments to confirm its efficacy ; this therapy avoids the side effects associated with surgery , such as nerve tissue damage .
moreover , the phase 1/2 studies of recombinant human ( rh ) mmp-7 are currently ongoing in the united states , and careful monitoring is required for these clinical trials . in conclusion , patients with lumbar herniated discs may benefit from the development of a less invasive treatment for disc herniation , which can be applied even immediately after the onset of disease symptoms . |
we complied with the institutional review boards of the centers for disease control and prevention ( cdc ) ( protocol 4797 ) and the broad institute of mit and harvard .
denv was obtained from human serum received through the passive surveillance system administered by cdc .
each sample was accompanied by a form that captured geographic and clinical information maintained for this study without patient identifiers . primary or secondary
status of infection was inferred by absence or presence of serum immunoglobulin g ( 15 ) .
selection of 3 isolates per year in the 5 municipalities with the highest reporting of denv-2 cases resulted in 253 isolates , of which 140 were successfully sequenced and are representative of our virus repository with respect to patient age ( 27.7 vs. 22.6 years ) , sex ( 54.4% vs. 47.4% male ) , and history of infection ( 84.6% vs. 77% secondary infections ) .
we extracted rna from tissue culture supernatant using the m48 or mdx biorobot ( qiagen , valencia , ca , usa ) .
cdna was generated by using sensiscript rt ( qiagen ) with random hexamers ( applied biosciences , foster city , ca , usa ) .
presence of cdna was confirmed by pcr by using pfuultraii ( stratagene , la jolla , ca , usa ) or itaq ( bio - rad , hercules , ca , usa ) dna polymerase and specific oligonucleotides ( cdc , atlanta , ga , usa ) .
amplicons were generated by pcr at cdc ( san juan , pr ) and sequenced at the broad institute ( cambridge , ma , usa ) by bidirectional sanger sequencing on an abi 3730 , after pcr with 96 m13-tailed serotype - specific primers .
resulting reads were trimmed of the primer sequences , filtered for high quality , and assembled by using algorithms developed by the broad institute .
all coding sequences for the polyproteins ( 10,173 nt ) and parts of the 5′ and 3′ noncoding regions were deposited in genbank .
coding sequences for the unprocessed polyprotein ( 5′ and 3′ noncoding regions excluded ) were aligned with clustalw ( www.ebi.ac.uk/tools/clustalw/index.html ) in mega 4 ( www.megasoftware.net ) . maximum likelihood analysis and bootstrapping tests
were performed in paup * ( 16 ) under the best - fit substitution model estimated by modeltest v3.07 ( 14 ) ( parameters available on request ) .
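as a toy illustration of the distance information such an alignment carries , the following computes the raw proportion of differing sites ( plain python ; a deliberately crude stand - in for the model - based maximum likelihood analysis actually used ) :

def p_distance(a: str, b: str) -> float:
    # proportion of differing, ungapped sites between two aligned sequences;
    # substitution-model corrections (e.g., GTR) are deliberately omitted
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper()) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

print(p_distance("ATGGCT-A", "ATGACTTA"))  # toy aligned fragments -> 1/7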
mean rates of nucleotide substitution and relative genetic diversity ( ne τ , where τ is the generation time ) were estimated by using bayesian markov chain monte carlo ( mcmc ) methods in beast v1.4.7 ( http://mbe.oxfordjournals.org/content/25/7/1459 ) .
general time reversible substitution model with strict and relaxed molecular clocks and constant population size or bayesian skyline coalescent analysis was used .
all mcmc chains were run for sufficient length ensuring stationary parameters , with statistical error reflected in values of the 95% highest probability density .
amino acid differences were mapped by using parsimony methods in macclade v4.08 ( 17 ) .
we determined dn / ds ratios with the single likelihood ancestor counting ( slac ) method implemented in hyphy and accessed through the datamonkey server ( 13 ) .
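for intuition about the quantity behind dn / ds , a toy per - codon counter is sketched below ( python with biopython , an assumed dependency ; it performs none of slac's site normalization or ancestral reconstruction and is not the authors ' method ) :

from Bio.Seq import Seq  # biopython; assumed dependency for codon translation

def ns_counts(cds1: str, cds2: str) -> tuple[int, int]:
    # per-codon tally of nonsynonymous vs. synonymous differences between
    # two aligned coding sequences
    n = s = 0
    for i in range(0, min(len(cds1), len(cds2)) - 2, 3):
        c1, c2 = cds1[i:i + 3], cds2[i:i + 3]
        if c1 == c2 or "-" in c1 + c2:
            continue
        if str(Seq(c1).translate()) == str(Seq(c2).translate()):
            s += 1  # same amino acid: synonymous difference
        else:
            n += 1  # amino acid changed: nonsynonymous difference
    return n, s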
associations between phylogeny and geographic data were investigated by using bayesian tip - association significance testing ( http://evolve.zoo.ox.ac.uk/evolve/bats.html ) on the posterior sample of trees calculated by beast , using the parsimony score , association index , and monophyletic clade size statistics .
during 1986 - 2007 , dengue cases in puerto rico ranged from 2,000 to 16,000 per year ( figure 1 , panel a ) , with major epidemics ( > 8,000 cases ) reported in 1986 , 1992 , 1994 , 1998 , and 2007 ( 2 - 4,18,19 ) . despite major fluctuations in serotype circulation ,
denv-2 circulated predominantly for 10 years ( figure 1 , panel b ) , alternating with denv-1 through 2 periods of resurgence during the 1990s and cocirculation of denv-4 ( figure 1 , panel b ) .
denv-2 declined markedly after the 1998 epidemic and the dissemination of denv-3 concomitant to the disappearance of denv-1 and -4 .
however , denv-2 continued to cause a low number of cases during 1999 - 2003 and reemerged in 2004 - 2007 ( figure 1 , panel b ) .
samples from every year of the 22-year study period ( figure 1 , panel c ) comprised our analysis .
a ) number of suspected , clinically defined cases of dengue fever / dengue hemorrhagic fever by year reported to the centers for disease control and prevention s dengue branch .
b ) percentage of identifications of each serotype relative to the total of positive serotype identifications by using tissue culture isolation or reverse transcription pcr per year .
numbers in parenthesis indicate numbers of dengue virus ( denv ) serotype 2 identifications each year .
black , denv-1 ; blue , denv-2 ; white , denv-3 ; red , denv-4 .
c ) number of partially sequenced ( e gene ) autochthonous puerto rican isolates reported by previous studies ( 14,20 ) and whole genome sequences obtained in the present study by year of their corresponding case presentation .
d ) bayesian coalescent inference of population dynamics and genetic diversity by using the bayesian skyline plot .
sampling procedures were used to estimate posterior distribution of denv-2 genetic diversity in an effective population through the study period on the basis of full genome sequence data .
x axis , time in years through the study period ; y axis , product of the effective population size ( relative genetic diversity ) and generation length in years ; black line , median estimate ; blue shadow , 95% highest probability density .
the bayesian skyline analysis ( figure 1 , panel d ) of the autochthonous viral sequences ( figure 2 ; clades ib and ii , figure a1 ) showed a gradual increase in the genetic diversity of denv-2 during 1987 - 1991 that corresponds to a period of high transmission and dominance ( figure 1 , panel b ) .
this increase was followed by 9 years of high genetic diversity that coincided with a period of denv-1 and denv-4 cocirculation .
the genetic diversity of denv-2 declined sharply during 1999 - 2003 , coinciding with a period of minimal denv-2 transmission .
maximum likelihood phylogeny of the 140 puerto rico and 20 international isolates of denv-2 ( see number of isolates by year below ) .
names of clades ( i , ii , and iii ) and subclades ( ia , ib ) are shown at the base of their respective branches on the phylogeny tree .
clade ii ( dark blue ) circulated during 1986 - 1996 and clade i ( light blue ) during 1994 - 2007 .
subclade ia and clade iii represent foreign , transient reintroductions throughout the 22-year study period .
black dots indicate 18 isolates from puerto rico with phylogenetic associations closer to foreign isolates than to other puerto rico viruses .
two - letter geographic codes indicate origin ( pr , puerto rico ; jm , jamaica ; sj , saint john ; sh , saint thomas ; ve , venezuela ; cl , colombia ; nc , nicaragua ; mx , mexico ; cu , cuba ; si , saint kitts ; dr , dominican republic ; sx , saint croix ; br , brazil ) . pr code is accompanied by a 2-digit number that indicates geographic location ( municipality ) on the island .
the 2-digit numbers between the geographic codes and the genbank accession numbers indicate the year of the corresponding case of each isolate .
bootstrap values are shown for all clades ( i , ii , and iii ) , subclades ( ia , ib ) , and most immediate lineages .
twelve amino acid changes associated with ib / ii differences are shown on the table ; 6 other selected changes across subclade ib are shown on the left at the base of the branches exhibiting the respective changes .
relevant epidemiologic events are highlighted to the right with pie charts showing the relative levels of each serotype isolated for that period black , denv-1 ; blue , denv-2 ; white , denv-3 ; red , denv-4 .
the densely populated island of puerto rico ( 3,808,610 population ; 3,508 square miles ) is divided into 78 municipalities grouped in 8 regions .
the 140 denv-2 genomes from 37 municipalities represented all 8 regions and ranged from 2 - 20 isolates per year ( figure 1 , panel c ; figure 2 ) .
the number of puerto rico sequences is proportional to the epidemic level or the relative proportion of denv-2 identifications in the municipalities with highest denv-2 reporting per year .
the phylogeny of the 160 denv-2 genomes showed 2 major clades ( i and ii ) and a smaller clade ( iii ) ( figure 2 ) .
subclade ia ( 1998 - 2007 ) contains 13 puerto rico and 12 caribbean sequences , including 1 from st .
subclade ib ( 1994 - 2007 ) contains 90 sequences mostly of local origin , but the presence of 6 foreign sequences and 1 local basal sequence confirms its foreign origin .
these genetically distinct isolates do not fit in clade i or ii , but a separate analysis with publicly available envelope gene sequences pointed to a possible caribbean origin ( k.l .
first , a mixture of foreign and local strains at the base of subclades ia and ib provides evidence of multiple introductions .
second , these years also are associated with a distinct subgroup basal to subclade ib , concomitant with the extinction of clade ii in 1997 .
third , a period of limited circulation of denv-2 , reflected in low levels of genetic diversity ( 1999 - 2003 ) , coincided with the expansion of denv-3 and the decline of denv-1 and -4 .
forty - nine amino acid differences mapped to the phylogeny were detected across the major internal branches of the tree .
twenty of these comprise major differences between clades i and ii and between subclades ia and ib , as well as substitutions that arose during the continuous evolution of subclade ib ( figure 2 ) . only 1 aa substitution distinguished isolates in clades i / ii from iii : a hydrophilic glutamine to a hydrophobic leucine at position 131 in the e protein . excluding pr79_1995_eu569708 as a possible foreign introduction ,
18 aa differences distinguish isolates across clade i , 12 of which separate subclade ib from clade ii and potentially could have been involved in the 1994 - 1997 lineage turnover ( figure 2 ) .
the remaining differences between isolates in subclade ib and clade ii were present in nonstructural ( ns ) genes and are predominantly conservative mutations , with the exception of position 31 in ns3 , which was nonconservative . among the additional changes , the only nonconservative mutation was a hydrophobic alanine to hydrophilic threonine at position 137 in ns4b that originated with pr40_1999_eu482730 , and most changes were found in the ns genes .
using bayesian mcmc and dn / ds analyses , we estimated the mean substitution rates for the full genomes at 9 × 10^-4 to 1.1 × 10^-3 substitutions / site / year for all clades , consistent with previously published rates ( 20,21 ) .
the low dn / ds ratios ( 0.070.08 ) provide evidence of a low percentage of substitutions that have been fixed along independent lineages , possibly indicating purifying , negative selection .
bats analysis shows that lineages often correlated with the corresponding region of origin of the isolates .
seven of the 8 regions had > 4 isolates in subclade ib or clade ii .
the most significant geographic correlations of lineages were found in the san juan ( 1986 - 1990 and 1994 - 1996 ) , ponce ( 1987 - 1989 ) , and mayaguez ( 1989 and 1993 ) regions ( figure 3 , panel a ) .
in addition , isolates clustered geographically for san juan ( 1997 - 1999 and 2001 - 2006 ) , caguas ( 1998 - 2001 , 2004 , and 2005 ) , ponce ( 1995 - 1997 and 2005 ) , mayaguez ( 1996 - 1998 and 2006 ) , aguadilla ( 1996 - 1998 ) , and arecibo ( 1994 - 1995 , 2004 , and 2006 ) ( figure 3 , panel b ) .
considering the denv-2 historical data , we recognize that high - reporting municipalities usually are located in regions where we identified significant phylogenetic clustering ( figure a1 ) .
for example , in 1987 , most denv-2 cases originated from the ponce and san juan regions , where we identified lineages of clade i. for 1994 - 1996 , denv-2 cases in the san juan , ponce , mayaguez , and arecibo regions may reflect the coexistence of subclade ib and clade ii . * correlation estimated by using bats .
bats , bayesian tip - association significance testing ; hpd , highest probability density ; ci , confidence interval .
b ) maximum - likelihood phylogeny of subclade ib shows isolates by year and genbank accession numbers .
six regions had > 5 isolates ( san juan , caguas , ponce , mayaguez , aguadilla , and arecibo ) .
c ) eight regions of puerto rico with colors corresponding to isolates in panel a and year for the 3 regions with more isolates of that clade : san juan , mayaguez , and ponce .
d ) eight regions of puerto rico , showing colors and years corresponding to isolates in panel b. correlation between phylogeny and geographic location of isolation for the isolates in this study was estimated by using bayesian tip - association significance testing .
association index 6.51 ( 95% confidence interval 6.03 - 7.22 ) ; parsimony score statistic 52.27 ( 95% confidence interval 51 - 54 ) .
monophyletic clade size bayesian tip - association significance estimates are shown for 6 regions of puerto rico with > 5 isolates represented in at least 1 subclade and statistically representative geographic associations ( p<0.05 ) .
we investigated other possible associations with the denv-2 phylogeny , including age and df / dhf status , but found none .
most denv-2 infections were secondary ( 84.6% and 77% of denv-2 infections in the cdc collection and this study , respectively ) .
however , we found no relationship between phylogeny and incidence of primary or secondary infection in patients .
the year 1999 began a period of low circulation and low genetic diversity of the caguas lineage of subclade ib ( figure 1 , panel d ; figure 2 ; figure 3 , panel b ) that lasted until 2003 . during these 4 years , most denv-2 cases originated from only 4 municipalities in eastern puerto rico ( figure 4 , panel a ) ; < 20 additional denv-2 cases were reported during that period in 12 other neighboring municipalities ( figure 4 , panel a ) . because phylogenetic lineages are geographically and temporally clustered ( figure 2 ) , we illustrated these associations on the map of puerto rico ( figure 4 ) .
this map shows that denv-2 descendants from western puerto rico emerged in san juan in 1997 - 1998 ( figure 4 , panel b , top ) , then appeared and persisted within the refuge in 1999 - 2002 ( figure 4 , panel c , middle ) , and then disseminated across the island in 2003 - 2005 ( figure 4 , panel d ) . in the 4 municipalities with uninterrupted denv-2 transmission , denv-2 incidence increased 2 years after the islandwide increase ( figure 5 ) .
denv-3 incidence within this denv-2 refuge was minimal during the period of high denv-2 incidence but peaked 2 years later , concomitant with an increase across the rest of the island . epidemiology of dengue virus ( denv ) serotype 2 in puerto rico , 1997 - 2006 :
a ) municipalities with persistent denv-2 transmission ( caguas , juncos , las piedras , carolina ) versus those with discontinuous transmission ( morovis , toa alta , toa baja , cataño , guaynabo , cidra , san lorenzo , canóvanas , humacao , naguabo , ceiba , fajardo ) , 1998 - 2002 .
inset shows satellite view ; red dot indicates national capital ( san juan ) , and yellow box indicates region where denv-2 took refuge during 20002002 .
white pins point to specific geographic locations where denv-2 isolates were collected during the specified time period .
b ) denv-2 traveled to the san juan region from the west during 1997 - 1999 ; c ) denv-2 transmission retracted to the eastern , refuge region with restricted dispersion patterns during 2000 - 2002 ; d ) denv-2 reemerged focused on the san juan region and later dispersed throughout the island during 2003 - 2006 .
incidence of dengue virus ( denv ) serotypes 2 and 3 in puerto rico , 1996 - 2005 .
solid blue line , incidence of denv-2 within the refuge region ; dashed blue line , incidence of denv-2 in the rest of the island outside the refuge region ; solid black line , incidence of denv-3 within the denv-2 refuge region ; dashed black line , incidence of denv-3 in the rest of the island outside the refuge region .
incidence was calculated as number of confirmed , positive cases of each serotype per thousand residents .
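the incidence definition above amounts to a one - line computation ( python ; the example counts are hypothetical , for illustration only ) :

def incidence_per_thousand(confirmed_cases: int, residents: int) -> float:
    # the definition quoted above: confirmed cases of a serotype per 1,000 residents
    return 1000.0 * confirmed_cases / residents

print(incidence_per_thousand(58, 86_000))  # hypothetical counts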
puerto rico is a model for fine - scale studies on denv evolution in the americas .
the long - term persistence of denv-2 and its ability to reemerge after transient periods of low circulation is a remarkable aspect of the epidemiology of dengue in the region .
the fact that 13% of denv-2 isolates represent importations or close descendants from importations brings new insights to our understanding of denv long - term circulation .
foreign viruses were identified in 8 years ( 1987 , 1989 , 1991 , 1995 , 1998 , 1999 , 2005 , and 2007 ) , of which only 1991 and 1998 had been previously sampled ( 14 ) .
ten of the 18 introductions occurred during periods of high denv-2 predominance : 1987 - 1991 , 1995 , and 2005 - 2007 ( figure 1 , panel b ; figure 2 ) .
the other 8 introductions originated from the 1998 epidemic or shortly thereafter ( 1999 ) . therefore , denv-2 seems to be introduced mainly during periods of favorable preponderance , not necessarily epidemic transmission of this serotype .
these assessments showed a previously unknown feature of denv-2 persistence : the endemic strain is recalcitrant to influences from frequent foreign introductions .
why introduced denv-2 failed to persist in the presence of the dominant subclade ib viruses is not well understood .
the puerto rico strain might be highly adapted and thus have a fitness advantage , the frequently introduced strains might be simply underrepresented , or introduced strains may have disappeared through genetic drift .
isolate pr76_1995_eu569708 , which lies basal to this subclade in the phylogeny ( figure 2 ) , is more closely related to south american denv-2 viruses than to other puerto rico viruses , and this lineage does not appear to have progressed , supporting the foreign origin of subclade ib .
our findings then show that subclade ib resulted from an introduced strain , as previously suggested by bennett et al .
( 14 ) , and successfully penetrated during a period of proportionally high incidence of foreign introductions .
interestingly , this clade replacement was completed in 1997 , less than a year before the emergence of denv-3 and the concomitant decline of denv-1 , -2 , and -4 .
the early portion of subclade ib is seen as a period of short - lived lineages ending in 1997 ; therefore , the rise and expansion of this subclade mainly occurred in coexistence with denv-3 , a different epidemiologic scenario from that of the now - extinct clade ii a decade earlier .
the dominance of conservative amino acid changes that segregated the viruses by clade hinders the assessment of phenotypic changes .
compensatory mutations might have conferred replicative advantages that could have influenced the displacement of clade ii or the persistence of subclade ib in puerto rico ; however this hypothesis has not been tested .
others have not detected positive selection and attribute lineage extinctions or clade replacements to stochastic events rather than natural selection ( 25 ) .
more analysis to detect site - specific selection is needed to corroborate whether positive selection is not at play in these populations of viruses .
the period 1999 - 2003 represents historically low rates of denv-2 circulation ( figures 1 , 2 , 4 ) , and the epidemiologic and phylogenetic aspects of this transient retreat had not been studied previously .
we show that the genetic variability of denv-2 decreased during these 4 years when the virus was transmitted in only a subset of municipalities .
denv-2 represented 29% of the cases in this area but only 5% island - wide .
the reason this region became a refuge of denv-2 for 4 years remains unclear , but the low incidence of denv-2 in prior years compared with the rest of the island suggests susceptibility for infection in this population ( figure 5 ) .
studies in thailand showed serotype displacement affecting population diversity and lineage turnover ( 26 ) .
short - term serotype cross - protection has been suggested to contribute to serotype displacements ( 27 - 29 ) , implying that as denv-3 infected a large susceptible population , cross - protective antibodies momentarily impeded transmission of other serotypes and dissemination of denv-2 outside the eastern refuge .
our study confirms the utility of systematic sampling and genome sequencing in large - scale surveillance systems as ways to understand the dynamics of dengue transmission and endemicity . | to study the evolution of dengue virus ( denv ) serotype 2 in puerto rico , we examined the genetic composition and diversity of 160 denv-2 genomes obtained through 22 consecutive years of sampling .
a clade replacement took place in 1994 - 1997 during a period of high incidence of autochthonous denv-2 and frequent , short - lived reintroductions of foreign denv-2 .
this unique clade replacement was complete just before denv-3 emerged . by temporally and geographically defining denv-2 lineages , we describe a refuge of this virus through 4 years of low genome diversity .
our analyses may explain the long - term endurance of denv-2 despite great epidemiologic changes in disease incidence and serotype distribution . |
autism is a neurodevelopmental condition characterized by impairments in social interaction , impaired communication and restricted , repetitive , or stereotyped behaviors .
autism specifically affects brain function in the areas responsible for the development of communication and social interaction skills .
early diagnosis , early intensive remedial education and behavioral therapy significantly enhance the child 's social functioning .
autism was first described in 1943 by the us psychiatrist leo kanner , and affects boys 3 - 4 times more often than girls .
common etiological factors proposed include post - encephalitic infection or sepsis , genetic and autoimmune factors and vitamin d deficiency .
family income , education , and lifestyle do not seem to affect the risk of autism .
the risk of dental caries and gingivitis is expected to be higher in these patients due to improper brushing and flossing because of the difficulties the trainers and parents encounter when they brush the children 's teeth .
it could also be due to a lack of necessary manual dexterity of autistic children .
in general , children with autism prefer soft and sweetened foods , and they tend to pouch food inside the mouth instead of swallowing it due to poor tongue coordination , thereby increasing the susceptibility to caries . also , children diagnosed with autism spectrum disorder ( asd ) are prescribed psychoactive drugs or anticonvulsants , and the presence of generalized gingivitis might be a side effect of these medications .
children with autism have multiple medical and behavioral problems , making their dental treatment extremely difficult .
several studies show that autistic children demonstrate self - injurious behavior ( sib ) , aggression , odd responses to sensory stimuli , unusual food likes or dislikes .
they also have abnormalities of mood and excessive fear , causing injuries to their head , neck or mouth . in general , relatively little has been written about autism in developing countries as compared with north america and europe .
while this disorder is not rare , majority of people in india have not been diagnosed and do not receive the services they need due to lack of awareness among medical professionals .
information on the patterns of development of the disease in the population is important because it acts as a foundation for the planning of public oral health policies .
the present study would attempt to explore the oral hygiene practices and oral health status in such patients as compared to the control group .
one hundred and seventeen patients diagnosed with autism were screened for the study from academy for severely handicapped and autistics , ranging from the age 5 to 22 years .
a total of 126 healthy individuals of the same age group were screened from a school of our town to form the control group .
the selection criteria for the sample were : patients diagnosed to have autism for the test group / cases , normal patients of similar age group as controls .
exclusion criteria were dental treatment in the last 6 months , any other systemic disease known to cause dental problems and uncooperative patient .
the test and control groups were divided into three categories , based on the type of dentition present : primary dentition ( category 1 ) , mixed dentition ( category 2 ) and permanent dentition ( category 3 ) . in the test group , category 1 consisted of 11 patients , category 2 of 48 patients and category 3 of 58 patients . in the control group , category 1 consisted of 13 patients , category 2 of 58 patients and category 3 of 53 patients . following a complete medical history , drug history if any , and personal history regarding oral hygiene practices , all patients were examined by one examiner using a dental mirror , straight explorer , and a cpitn probe under artificial light .
patients were questioned about their frequency of brushing , method of brushing , if any other dental aids were used and whether they could brush themselves or needed assistance in brushing .
plaque and gingival status was recorded using plaque index ( loe , 1967 ) and gingival index ( loe , 1967 ) respectively for the entire dentition .
periodontal status was assessed by community periodontal index of treatment needs and dental caries by dmft / def index .
the periodontal status was recorded using the cpitn . owing to evidence of higher periodontal disease in autistics ,
the cpitn was slightly modified and pocket depths were also recorded in children below 15 years of age in both the case and control groups . however , probing was not done on erupting teeth and they were excluded . according to the cpitn , the dentition was divided into six sections ( left / right maxillary / mandibular posterior teeth , maxillary / mandibular anterior teeth ) . each section was examined only if two or more teeth were present and not scheduled for extraction .
code zero indicates healthy periodontium , code one indicates bleeding on probing , code two indicates plaque or other retentive factors , code three indicates a pathological pocket 4 - 5 mm in depth and code four indicates a periodontal pocket 6 mm or more in depth .
tn1 indicates a need for improving oral hygiene only , tn2 indicates scaling and root planing and tn3 indicates complex periodontal treatment .
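the coding scheme can be captured in a small lookup ( python ; the mapping from the worst recorded sextant code to a treatment - need category follows the usual cpitn convention , which we assume here since the text does not spell it out ) :

# codes and treatment needs as described in the text above
CPI_CODES = {
    0: "healthy periodontium",
    1: "bleeding on probing",
    2: "plaque or other retentive factors",
    3: "pathological pocket 4-5 mm deep",
    4: "periodontal pocket 6 mm or more deep",
}

def treatment_need(worst_sextant_code: int) -> str:
    # assumed convention: code 1 -> tn1, codes 2-3 -> tn2, code 4 -> tn3
    if worst_sextant_code == 0:
        return "no treatment needed"
    if worst_sextant_code == 1:
        return "tn1: improve oral hygiene only"
    if worst_sextant_code in (2, 3):
        return "tn2: scaling and root planing"
    return "tn3: complex periodontal treatment"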
dental caries was measured using the def index for the primary ( 0 - 5 years ) and early - mixed dentitions ( 6 - 10 years ) , and the decayed , missing , and filled teeth index ( dmft ) for the late - mixed ( 11 - 15 years ) and permanent dentitions ( 16 years and older ) . according to the codes and criteria established by the world health organization , a tooth was considered decayed when there was frank carious cavitation , as missing if it was extracted due to caries and as filled if it had a restoration for a carious lesion . exfoliated teeth in the primary and mixed dentition , unerupted teeth , and those extracted for reasons other than caries were not included in the indices .
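a minimal sketch of the scoring rule , assuming tooth statuses have already been coded per the who criteria above ( python ; the status labels are our own ) :

def dmft(tooth_status: list[str]) -> int:
    # 'd' frank carious cavitation, 'm' extracted due to caries, 'f' restored
    # for a carious lesion; exfoliated/unerupted teeth and extractions for
    # other reasons are excluded from the list before scoring
    return sum(code in ("d", "m", "f") for code in tooth_status)

print(dmft(["d", "f", "sound", "m", "sound"]))  # -> 3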
statistical analysis was done using descriptive statistics , independent - sample t - test , contingency coefficient test and one - way anova test in spss 14 software .
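the authors ran these tests in spss ; the same classical tests are available in scipy . a sketch on simulated scores ( all numbers are hypothetical ; the category sizes 11 / 48 / 58 mirror the test group ) :

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# hypothetical gingival-index scores shaped like the study's groups (n=117 vs n=126)
cases = rng.normal(1.00, 0.30, 117)
controls = rng.normal(0.85, 0.30, 126)

t, p = stats.ttest_ind(cases, controls)          # independent-sample t-test
# one-way anova across the three dentition categories of the test group
f, p_anova = stats.f_oneway(cases[:11], cases[11:59], cases[59:])
print(p, p_anova)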
table 1 shows that around 66% ( 65.81% ) of the cases and around 68% ( 68.25% ) of the controls brushed their teeth once daily .
hence , there was no statistically significant difference in the brushing habits between cases and controls ( p = 0.573 ) .
table 2 shows that on intra group comparison , no statistically significant difference in the brushing habits was seen in between categories in both the cases ( p = 0.235 ) and controls ( p = 0.265 ) .
figure 1 shows that autistic children require assistance in brushing their teeth ; self - brushing by these children increased with age , which was statistically significant ( p = 0.001 ) , while the controls brushed by themselves in all the categories .
( table 1 : intergroup comparison of brushing habits ; table 2 : intragroup comparison of brushing habits ; figure 1 : method of brushing in autistic group )
the mean dmft score in cases was 1.2966 and in controls 3.736 [ figure 2 ] .
the prevalence of caries was significantly lower in autistic patients ( p = 0.000 ) ; the incidence of caries increased with age in both cases and controls [ table 3 ] .
the mean pi and gi scores in cases were 1.3039 and 1.0015 , respectively , and in controls were 1.0015 and 0.8542 , respectively [ figure 2 ] .
the scores were significantly higher in cases than in controls ( p = 0.000 and p = 0.000 , respectively ) , with the incidence increasing with age in both cases and controls [ table 3 ] .
( figure 2 : intergroup distribution of oral diseases ; table 3 : intragroup distribution of oral diseases )
cpitn was recorded only in category 2 and category 3 , which showed that the prevalence of periodontal disease was significantly higher in autistic patients ( p = 0.000 ) [ figure 3 ] and that a greater number of autistic patients required professional scaling and root planing ( p = 0.000 ) [ figure 4 ] .
the incidence of periodontal disease and the need for periodontal treatment increase with age in both cases and controls , which is statistically significant [ table 4 ] .
( figure 3 : intergroup comparison of cpi scores ; figure 4 : intergroup comparison of tn scores ; table 4 : intragroup comparison of cpi and tn scores )
over the past decade , autism has emerged as a major public health concern in many countries , characterized by a complex , behaviorally defined , static immature brain disorder .
providing oral care to children with autism requires patience and a thorough understanding of the patient 's degree of mental disability .
thus effective oral health promotion strategies need to be implemented to improve the oral health status of autistic children .
a total of 117 autistic patients and 126 healthy individuals were examined for the study in the age group of 5 - 22 years . in cases , a male - female ratio of 3.6:1 was observed , which was consistent with previously reported sex ratios of 2.8:1 , 4:1 , 3.7:1 and 4.3:1 , reflecting the higher prevalence of autism in males .
the mean age among cases was 12.8 years , while in controls it was 12.3 years .
on examining the oral hygiene practices it was observed that the majority of the subjects from both groups brushed their teeth once daily .
patients with autism probably lacked manual dexterity and hence frequently needed assistance in brushing , whereas all the controls brushed by themselves . in autistics , as age increased , more children could brush without assistance .
occasional use of mouthwash was seen in the autistic patients under supervision of care takers , flossing was not done in either of the groups .
the authors did not come across many studies which recorded the oral hygiene practices in autistics .
however , in a study by pilebro et al . , it was noted that parents usually assisted their autistic children in brushing their teeth at least once daily . in the present study , the prevalence of caries was lower in autistics compared to controls , which was statistically significant .
however , studies by tharapiwattananon et al . have shown a higher caries incidence in autistics .
lower caries in autistics can be attributed to the good supervision by the parents and school teachers in the child 's daily life activities like tooth brushing and lack of in - between snacking .
the subjects in the case group for the present study were selected from a school that had a strict schedule for meals with a lack of in - between snacking . in a study with a similar result , shapira et al . concluded that the lower caries in autistics was due to a less cariogenic diet , regular behavior at meals , and the autistics being less partial to sweets .
bassoukou et al . , in a study to evaluate saliva flow rate , buffer capacity and dental caries experience in autistics compared to healthy controls , found neither a higher flow rate nor a better buffer capacity of saliva in autistics . in our study , even though the overall caries incidence was lower in autistics compared to controls , on intragroup comparison it was found that the dmft score increased with age in autistics , whereas in the control group no statistically significant increase in dmft was seen with growing age . on examining the periodontal status , we found that autistics had statistically significant increases in pi , gi , cpi and tn scores , which is in accordance with shapira et al . and luppanapornlap et al .
it was very perplexing to know that autistics had a lower caries incidence and higher incidence of periodontal disease .
shapira et al.15 also reported high periodontal disease and low caries incidence in autistic patients , which was unexplained . in our study , about 43% of the autistic patients required professional scaling and 20.6% required scaling and root planing , whereas in the control group only 26% required professional scaling and 0.9% required professional scaling and root planing .
this can be explained by the fact that autistic patients can not brush as effectively as their normal counterparts . in a study with a similar result , luppanapornlap et al.5 stated that poor hand coordination leads to difficulty in maintaining good oral hygiene in autistics , thus increasing gingival diseases .
medina et al.17 has stated that self - injurious habits can also be the reason for increased gingival diseases .
it was found in our study that around 28% of the autistic children had a history of bruxism and lip biting .
friedlander et al.18 stated that the changes in gingiva can be due to the side effects of medications given to autism patients .
all the cases examined in the present study were under medication : allopathic , ayurvedic or homeopathic .
around half of the autistic patients were on ayurvedic and homeopathic treatment , and the rest on allopathic medicines which included antidepressants and anticonvulsants .
However, a complete, detailed drug history could not be recorded due to the complexity of the treatment and the non-compliance of the parents, which we consider a drawback of our study. In the case group, two patients gave a history of drug-induced gingival enlargement for which dental treatment was taken and the medication was substituted; the drug history, however, could not be elicited. One was a 9-year-old male patient on homeopathic treatment, and the other was a 14-year-old boy on risperidone 0.5 mg twice daily. None in the control group had gingival enlargement. In a recent study by Rashid et al.19, who assessed serum and salivary oxidative stress biomarkers and evaluated the oral health status of autistics, it was shown that autistics had higher levels of serum and salivary malondialdehyde (MDA) and lower levels of serum and salivary superoxide dismutase (SOD). SOD is an antioxidant that plays a vital role in the protective mechanism of the periodontium. Thus, these factors can be related to the increased periodontal disease in autistics.
A few studies in the past have shown a higher rate of oral diseases among autistic patients. However, the results of the present study are that autistics have a lower caries rate and a higher rate of periodontal disease. This study also evaluated oral hygiene practices and found that autistic patients usually require assistance in brushing their teeth. The present study has a larger sample size compared to previous studies and is also the first among the Indian population. However, we could not establish the reason for the higher periodontal disease and lower caries. Thus, future studies should be directed toward explaining this, along with a larger sample size and long-term follow-up. The present study suggests that autistic patients have a higher rate of periodontal disease and lower caries compared to controls. Thus, children with autism require special dental management to improve their oral health by maintaining efficient oral hygiene. Attempts should be made by parents, general dentists, and periodontists to teach oral hygiene methods to these patients by constant repetition and patience, as autistic individuals can develop skills over a period of time and lead a more productive and independent life. | Aim: The present study attempts to explore the oral hygiene practices and oral health status in autistic patients as compared to non-affected, same-aged healthy individuals. Materials and Methods: The oral hygiene practices, prevalence of caries, and periodontal status were evaluated in 117 autistic patients and 126 healthy individuals.
The test and control groups were divided into three categories based on the type of dentition: primary dentition (category 1), mixed dentition (category 2), and permanent dentition (category 3). Plaque and gingival status were recorded by the plaque index (Löe, 1967) and gingival index (Löe, 1967), periodontal status by the community periodontal index of treatment needs, and dental caries by the DMFT/def index. Statistical analysis was done using descriptive statistics, the independent-sample t-test, the contingency coefficient test, and the one-way ANOVA test with SPSS 14 software. Results: There was no statistically significant difference in brushing habits between autistics and controls (p = 0.573); however, autistics required assistance in brushing. The prevalence of caries was significantly lower in autistic patients (p = 0.000). Plaque and gingival scores were significantly higher in autistic patients (p = 0.000), and the prevalence of periodontal disease was significantly higher in autistic patients (p = 0.000). A greater number of autistic patients required professional scaling and root planing (p = 0.000). Conclusion: The present study suggests that autistic patients have a higher rate of periodontal disease and lower caries compared to controls. Attempts should be made by parents, general dentists, and periodontists to teach oral hygiene methods to these patients by constant repetition and patience, as autistic individuals can develop skills over a period of time and lead a more productive and independent life. |
With a West Wing office but no specific title – and no precedent for adult children whose father is president – the rules Trump is subject to are under dispute
After months of attending meetings of world leaders and visiting factories with her father, the role of first daughter Ivanka Trump is officially expanding – creating new ethical issues for an administration that has been heavily criticized over its potential conflicts of interest.
She will not have a specific title, but Trump will have an office in the West Wing, a government-issued phone and computer and security clearance to access classified information, and she will advise her father.
“While there is no modern precedent for an adult child of the president, I will voluntarily follow all of the ethics rules placed on government employees,” she told Politico in a statement.
But following the ethics guidelines should not be voluntary, said Richard Painter, a law professor at the University of Minnesota who served as chief ethics lawyer for George W Bush between 2005 and 2007.
“Given what she’s going to do, I don’t think she has any choice,” he said. “She has a West Wing office, she has equipment, she has a White House email address, she’s going to be doing policy work,” said Painter.
“For purposes of the conflict of interest statute, I believe she is a government employee,” he added.
Ivanka Trump’s lawyer, Jamie Gorelick, argues that since she will earn no salary and not be sworn in, she does not count as a government employee. There is no precedent for adult children whose father is president working in the White House, although two presidents – Andrew Jackson and James Buchanan – had their nieces serve in the role of first lady since Jackson was a widower and Buchanan a bachelor.
Trump has handed control over the day-to-day running of her eponymous clothing business to an executive and its assets are maintained by a trust managed by two of her husband’s siblings.
As part of the trust rules, outlined in the New York Times, Trump can veto any potential business deals for her clothing company that might create a conflict with her political work.
That means, points out Painter, that Trump has to know about any new deal that might put her at risk of breaking the statute, meaning she can be held responsible.
“She’s got accountability on that stuff. She can’t just blame the trustee,” he said.
Trump’s marriage to her father’s senior adviser, the real estate developer Jared Kushner, poses additional potential problems, because both could benefit financially from each other’s businesses.
Painter warned that the pair should avoid official political discussions involved with trade agreements regarding textiles, real estate and even bank deregulation, since that can affect real estate.
That means if the premier of China visits the White House – most of Ivanka Trump’s clothing line is made in China and Hong Kong – it is fine for her to attend the meeting, but she should not mention trade and if the discussion begins to focus on trade, she should excuse herself, says Painter.
The ethics expert noted approvingly that Ivanka Trump engaged Wilmer Cutler Pickering Hale and Dorr, the same legal services used by the secretary of state, Rex Tillerson, former head of ExxonMobil, to handle issues of conflict of interest. Kushner also used the DC-based lawyers to manage his potential conflicts of interest with his family business after taking the role of adviser in the Trump administration.
“It’s a criminal statute, so people better not mess up under it. But I think she’ll do the right thing,” said Painter. ||||| Ivanka Trump's Move To The White House Raises Questions About Ethics
When Donald Trump was elected president, his daughter Ivanka Trump said she would move to Washington, D.C., but not into a White House office.
Since then, she has often been photographed in key White House meetings with foreign leaders and Cabinet members. Now she will have her own office in the West Wing, along with a security clearance and government-issued communications devices.
In her unpaid role, Ivanka will "continue to be the eyes and ears of her father and provide candid advice as she has for her entire adult life," her attorney, Jamie Gorelick, said in an NPR interview. "She is intending to spend some time on initiatives that she cares about, particularly with regard to women in the workplace."
Ivanka's elevated position has historians and ethics experts questioning the appropriateness of having one of the president's adult children serving directly in the administration, especially while continuing to own a business.
Julian Zelizer, a political historian at Princeton University, says Ivanka's White House role raises concerns such as: "Do the rules apply on nepotism, on conflict of interest, on other kinds of regulations that employees face?"
Zelizer says previous presidents have relied upon children for input. For example, President Teddy Roosevelt's daughter Alice frequently offered political advice, and George W. Bush often played an advisory role when his father, George H.W. Bush, was president.
But this situation is different because it appears Ivanka will have an expansive portfolio, not just offer insights. She will not be sworn in, nor need Senate approval.
Her husband, Jared Kushner, also is serving in the administration, but he has official status as a senior adviser who took an oath of office. Earlier this year, some critics questioned whether his role violated the anti-nepotism law passed in 1967 to prevent a president from placing a relative in a Cabinet or federal agency job.
But that law was challenged when President Bill Clinton named his wife, Hillary Clinton, to lead a task force on health care. A federal judge ruled in that case the anti-nepotism law doesn't apply to White House staff jobs.
When Kushner came into the White House job, he turned over his role in his family's real estate empire to family members, who continue to own the business.
Ivanka plans to continue to own her eponymous fashion and jewelry business, even though she has stepped back from daily management. Her father has followed that pattern, too, continuing to own The Trump Organization while putting his two eldest sons in charge of management.
Gorelick said Ivanka spoke with the White House counsel about the ethical implication of taking this role and "they are comfortable with what she's doing."
She added that Ivanka will "abide by all the ethical rules that she would abide by if she were an employee."
But just having a high-visibility role could stir up questions about whether her White House presence could be seen as promoting her products as people try to curry favor with her.
"She is not simply part of the family; she is a businesswoman," Zelizer notes.
Kathleen Clark, a law professor at Washington University in St Louis and a specialist in government ethics, says the Trump administration's arguments that conflict-of-interest rules don't bind the president or his daughter are disheartening.
"My biggest concern is that this is yet another erosion of government ethics standards in this White House," Clark said. | – Ivanka Trump has attended meetings with foreign leaders and Cabinet members; she has security clearance that allows her access to classified information, a government-issued phone and computer, a White House email address, and even her own White House office. She still doesn't have an official White House job, but NPR and the Guardian talked to historians and ethics experts who are concerned with the ethical implications of Trump having such a high-profile role, even if an unpaid one, in her father's administration—particularly since she still owns a business. Since her role is not official, she requires no swearing-in and no Senate approval. And, though past presidents have looked to their children for insight before, Trump's role looks to be more extensive than simply offering advice. Trump's attorney explains her role to NPR thusly: She will "continue to be the eyes and ears of her father and provide candid advice as she has for her entire adult life," plus "spend some time on initiatives that she cares about, particularly with regard to women in the workplace." She'll do all that while continuing to own her fashion and jewelry line, though she won't be as involved in managing the company; one political historian points out that could raise questions about whether her presence in the administration is looked at as promotion for her company. Trump's attorney says White House counsel is "comfortable" with the ethics of her role in the administration, and that Trump will abide by the same ethical rules paid employees abide by. The former chief ethics lawyer for George W. Bush concludes, "For purposes of the conflict of interest statute, I believe she is a government employee." |
Blue stragglers (BSs), which have stayed on the main sequence for a time exceeding that expected from standard stellar evolution theory for their masses, are important in population synthesis because of their peculiar properties. These objects lie above and blueward of the turn-off in the colour-magnitude diagram (CMD) of a cluster, may contribute remarkable spectral energy in the blue and ultraviolet, and affect the integrated spectrum of the host clusters, as they are bright and blue @xcite. The characters of BSs, i.e. luminosity, temperature, gravity etc., are relevant to their formation mechanism. There are various possible origins for BSs in theory, i.e. close-binary evolution (mass transfer from a companion or coalescence of both companions), stellar collisions (single-single, binary-single and binary-binary), interior mixing, recent star formation etc. @xcite. Given the diversity of BSs within one cluster, it is likely that more than one formation mechanism plays a role @xcite. Observational evidence shows that binaries are important at least in some cluster BSs and in some field BSs @xcite. Since the binary mass-exchange hypothesis was originally advanced by McCrea to explain the BS phenomenon, a number of attempts have been made to test the hypothesis in open clusters @xcite.
However, there is a lack of detailed binary evolution calculations. Monte Carlo simulations @xcite show that binary coalescence via case A evolution (mass transfer begins when the primary is on the main sequence) may be an important source of BSs in some clusters, while case B evolution (mass transfer begins when the primary is in the Hertzsprung gap) can only account for BSs in short-orbital-period binaries. Meanwhile, case A may also produce BSs by stable mass transfer, as it does not always or immediately lead to a merger. The difficulty in verifying the binary mass transfer hypothesis is the lack of evidence for variations of radial velocities for most BSs. However, some BSs have already been confirmed to be in binaries. A typical example is F190, a single-lined spectroscopic binary with a 4.2-day period @xcite in the old open cluster M67. As well, IUE (International Ultraviolet Explorer) spectra @xcite provide evidence that F90 and F131 (in M67) are Algol-type mass transfer systems. With the improvement of observational means, more and more BSs are detected to have variations in radial velocity.
It is therefore necessary to study the hypothesis in detail. As more hydrogen is mixed into the centre of an accretor that is still on the main sequence at the onset of Roche lobe overflow (RLOF), the accreting component moves upwards along the main sequence in response to accretion and its time on the main sequence is extended. When the mass of the accretor becomes larger than the corresponding cluster turn-off mass at the age of the cluster, it may be recognized as a BS and show element contamination from the primary on its surface. This element contamination will affect the observational characteristics of the star, and it depends on the details of the accretion process. The evolution of close binaries has been well studied over the last decades (see a review by van den Heuvel and some recent papers by De Loore & Vanbeveren, Marks & Sarna, Han et al., Hurley, Tout & Pols, and Nelson & Eggleton). Many of these works are related to the secondary (initially the lower-mass component in a binary), where the accretion mode and accretion rate are of concern. Since the accreting matter may originally come from the convective core of the primary, thermohaline mixing, which results from accreting material with a higher molecular weight than the surface layers, was first introduced by Ulrich to describe this effect (see also Kippenhahn, Ruschenplatt & Thomas). It was treated as an instantaneous process in intermediate-mass and massive close binaries by some authors, because of its short timescale in these systems @xcite. As well, the details of the accretion process during RLOF were ignored in these studies, and we have no knowledge of its effect on the composition of secondaries during RLOF. Wellstein, Langer & Braun treated thermohaline mixing in a time-dependent way in massive binary evolution and argued that this is important, as both thermohaline mixing and accretion occur on a thermal timescale. In this paper, we concentrate on the behaviour of secondaries in low-mass binaries during or just after RLOF, applying the results to short-orbital-period BSs.
The computations are introduced in Section 2, and thermohaline mixing for low-mass close binaries is examined in Section 3. The results are shown in Section 4. In Section 5, we show a best model of F190 obtained by simulation over a complete parameter space. Finally, we give our conclusions, discussion and outlook on the work.
We use the stellar evolution code devised by Eggleton and updated with the latest physics over the last three decades @xcite. RLOF is included via the boundary condition @xmath4 when we follow the evolution of the mass donor. Here @xmath5 is the mass-loss rate of the primary, @xmath6 is the radius of the star, @xmath7 is the radius of its Roche lobe, and C is a constant. We take @xmath8 so that RLOF can proceed steadily and the lobe-filling star overfills its Roche lobe as necessary but never by much, i.e. a transfer rate of @xmath9 corresponds to an overfill of 0.1 per cent. When following the evolution of the primary, we store its mass-loss history as input for the subsequent calculation of the secondary, including the age at which RLOF begins, the mass-loss rate, and the composition of the lost matter.
The calculation of the secondary is stopped when it overfills its own Roche lobe and the system becomes a contact binary. Contact binaries are beyond our consideration here, since there are many uncertainties during the contact phase. Merger models will be studied in another paper.
The accreting matter is assumed to be deposited onto the surface of the secondary with zero falling velocity and to be distributed homogeneously over the outer layers. The change of chemical composition on the secondary's surface caused by the accreting matter is @xmath10, where @xmath11 is the mass accretion rate, and @xmath12 and @xmath13 are the element abundances of the accreting matter and of the secondary's surface for species @xmath14, respectively. @xmath15 is the mass of the outermost layer of the secondary. @xmath15 changes with the movement of the non-Lagrangian mesh as well as with the chosen resolution, but it is so small (@xmath16) in comparison with @xmath17 (@xmath18) during RLOF that we can ignore the effect of varying @xmath15 on the element abundances. Before and after RLOF, we get @xmath19 from the equation, which is reasonable in the absence of mixing.
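The update equation itself is elided above, but the content described, namely that each surface abundance relaxes toward the abundance of the accreted matter at a rate set by the accretion rate divided by the mass of the outermost layer, can be sketched as follows; the symbol names are mine, not the paper's.

    def surface_abundance_step(X_surf, X_acc, mdot_acc, m_outer, dt):
        """One explicit Euler step of the described surface-composition change.
        X_surf, X_acc: dicts of mass fractions (surface / accreted matter);
        mdot_acc: accretion rate (Msun/yr); m_outer: outermost-layer mass (Msun);
        dt: timestep (yr). With mdot_acc = 0 the abundances stay constant,
        matching the behaviour before and after RLOF noted above."""
        return {
            s: X_surf[s] + (mdot_acc / m_outer) * (X_acc[s] - X_surf[s]) * dt
            for s in X_surf
        }

    # Example: helium-rich accretion onto a solar-like surface (invented values):
    print(surface_abundance_step({"H": 0.70, "He": 0.28}, {"H": 0.10, "He": 0.88},
                                 mdot_acc=1e-9, m_outer=1e-10, dt=0.01))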
The opacity tables are from OPAL @xcite, with solar composition, and from Alexander & Ferguson in our calculations. Only stable RLOF is considered in our calculations, under which the accreting component may completely accrete the matter lost by the primary, as mass transfer is very slow in this case (@xmath20; see Section 4). The mass and angular momentum of the systems are therefore assumed to be conserved. Convective overshooting has little effect in the mass range (1.0-1.6 @xmath1) we consider @xcite. We have therefore not included it in our calculations.
During mass transfer in a close binary, the accreting matter may originally come from the convective core of the donor. It is rich in helium relative to the gainer's surface and has a higher mean molecular weight, which will cause secular instability during or after accretion. Thermohaline mixing will occur in this case. Kippenhahn, Ruschenplatt and Thomas estimated the timescale of the mixing for some massive binaries, taking into account the effects of radiation pressure and degeneracy: @xmath21, where @xmath22, @xmath23 and @xmath24 have their usual meanings, @xmath25 is the mean value of @xmath26 in the region of interest, @xmath27 is its depth, @xmath28 is the ratio of gas pressure to total pressure, and @xmath29 and @xmath30 are the mean free path and velocity of a photon, respectively; @xmath31 and @xmath32 take their standard definitions. A massive main-sequence star (@xmath34) with a helium envelope (@xmath35) was examined in that paper. Roughly estimated from a discontinuous model, in which there is a jump from a region of lower mean molecular weight to a higher one, the star has @xmath36 if diffusion extends over the whole radiative envelope. Ulrich obtained a timescale of 400 yr in a similar case. Thermohaline mixing was then treated as an instantaneous process for intermediate-mass and massive close binaries by some authors in previous studies @xcite. However, the actual time is longer than @xmath37 for several reasons; for example, @xmath38 decreases gradually while thermohaline mixing is under way.
A binary of 1.26 @xmath39 + 1.00 @xmath39 with an initial orbital period of 1.0 day is considered here to examine the effect of thermohaline mixing on low-mass binaries. In a low-mass binary, He-rich matter is usually accreted onto the surface of the secondary during the last phase of RLOF, and the He enrichment of the accreted matter is not very severe. The hydrogen mass fraction of the accreting matter is therefore assumed to be @xmath40 and the helium mass fraction to be @xmath41, which is an extreme case for low-mass binaries, since the hydrogen abundance usually does not reach such a low value. Another remarkable feature of low-mass binaries is the existence of a convective envelope, which may mix the accreted matter through the whole convective envelope very rapidly, minimizing thermohaline mixing in this region. We therefore show the profiles of the thermodynamical quantities @xmath22 and @xmath23 during RLOF in Figure [grad], to see the change of the surface convective region caused by the increase of the mass. A, B and C represent the mass of the secondary at about 1.00, 1.12 and @xmath42, respectively. The helium profile at the corresponding masses is also shown in the figure. Thermohaline mixing is not yet included in the figure. | Blue stragglers (BSs) are important objects in cluster populations because of their peculiar properties.
The colours and magnitudes of these objects are critical parameters in population synthesis of the host cluster and may depend remarkably on the surface composition of the BSs. Observations show that some BSs are short-orbital-period binaries, which may be accounted for by mass transfer in low-mass binaries. We therefore studied the effects of surface composition and of thermohaline mixing caused by secular instability on the accreting components of low-mass binaries, and applied the results to the short-orbital-period BS F190 in the old cluster M67. We examine thermohaline mixing in a low-mass accreting main-sequence star and find that, apart from the redistribution of composition beneath the surface, the mixing affects the accretor very little during Roche lobe overflow unless thermohaline mixing is treated as an instantaneous process. A series of calculations is then carried out for low-mass binaries under different assumptions. The results indicate no distinction in surface composition between the models with and without thermohaline mixing during Roche lobe overflow, but we still see divergences of the evolutionary tracks in the Hertzsprung-Russell diagram and the colour-magnitude diagram. The change of surface composition makes the gainer bluer and smaller than one with the original surface composition, while thermohaline mixing lessens the effect slightly. If thermohaline mixing were to act instantaneously, the effect would be lessened further. Our calculations show that case A and case B mass transfer may produce BSs in short- or relatively short-orbital-period binaries (including Algol systems), and that CNO abundance anomalies could be observed in these products. This is consistent with the results of Monte Carlo simulations in previous studies. Our simulation of F190 shows that the primary's mass @xmath0 of the appropriate models lies in the range 1.40 to 1.45 @xmath1, with initial mass ratio @xmath2 and initial orbital period @xmath3 days, indicating that case A is a more likely evolutionary channel than case B to form this object. The simulation also shows that it is very likely that F190 is still in a slow stage of mass transfer. As a consequence, obvious CNO abundance anomalies should be observed for this object. |
binaries: close - stars: evolution - stars: blue stragglers
The mean-field description of a many-body system, i.e. the Hartree-Fock (HF) and/or time-dependent Hartree-Fock (TDHF) theory, provides a simple tool for the description of certain aspects of complex quantum systems. However, it is well known that the mean-field approximation is suitable for the description of mean values of one-body observables, while quantum fluctuations of collective variables are severely underestimated. A second limitation of mean-field dynamics is that it cannot describe spontaneous symmetry breaking during the dynamical evolution: if certain symmetries are present in the initial state, these symmetries are preserved during the evolution. We have recently shown that a stochastic mean-field (SMF) approach @xcite, in which the TDHF evolution is replaced by a set of mean-field evolutions with properly chosen initial conditions, can be a suitable tool to go beyond mean field and to describe the evolution of a system close to a quantum phase transition @xcite. In a series of articles, we applied the SMF approach to describe transport properties in fusion reactions. Transport coefficients related to dissipation and fluctuations have been obtained @xcite that are crucial for understanding the physics of heavy-ion collisions around the Coulomb barrier. A summary of recent results is presented.
In a mean-field approach, the nuclear many-body dynamical problem is replaced by a system of particles interacting through a common self-consistent mean field. The information on the system is then contained in the one-body density matrix @xmath0, which evolves according to the so-called TDHF equation, $i\hbar\,\partial_t\rho = [h(\rho),\rho]$ (eq: puretdhf), where $h(\rho)\equiv\partial{\cal E}(\rho)/\partial\rho$ denotes the mean-field Hamiltonian. While quite successful in describing some aspects of nuclear structure and reactions @xcite, it is known not to describe properly the fluctuations of one-body degrees of freedom, i.e. correlations.
Numerous approaches, either deterministic or stochastic, have been proposed to extend mean field and describe fluctuations in collective space (see Ref. @xcite and references therein). Most often, these approaches are too complex to be applied in realistic situations with current computational power. A second limitation of mean-field dynamics is that it cannot describe spontaneous symmetry breaking during the dynamical evolution: if certain symmetries are present in the initial state, these symmetries are preserved during the evolution @xcite. The stochastic mean-field (SMF) approach has recently been shown to provide a suitable answer for the description of fluctuations, as well as of the symmetry-breaking process, while keeping the attractive aspects of mean field.
Let us assume that the aim is to improve the description of a system that, at the mean-field level and time @xmath3, is described by a density of the form $\rho(t_0) = \sum_i |\varphi_i\rangle n_i \langle\varphi_i|$ (eq: densmf), or more generally by an initial many-body density of the form @xmath5, where @xmath6 is a normalization factor while @xmath7 are the creation/annihilation operators associated with the canonical basis @xmath8. The mean-field evolution, Eq. (eq: puretdhf), then reduces to the evolution of the set of single-particle states, $i\hbar\,\partial_t|\varphi_i(t)\rangle = h[\rho]\,|\varphi_i(t)\rangle$ (eq: phitdhf), while keeping the occupation numbers constant. In the SMF approach, a set of initial one-body densities @xmath10 is considered, where @xmath11 labels a given initial condition. The density matrix components @xmath12 are chosen in such a way that, initially, the density obtained by averaging over the different initial conditions identifies with the density (eq: densmf). It was shown in Ref. @xcite that a convenient choice for the statistical properties of the initial sampling is @xmath13, where @xmath14 are mean-zero Gaussian random numbers, while @xmath15; the average here is taken over the initial conditions. In this approach, each initial condition given by Eq. (eq: ini) is evolved with its own mean field, independently of the other trajectories, i.e. $i\hbar\,\partial_t|\varphi^{\lambda}_i(t)\rangle = h[\rho^{\lambda}]\,|\varphi^{\lambda}_i(t)\rangle$ (eq: puresmf), while keeping the density matrix components constant. The evolution along each trajectory is therefore similar to standard mean-field propagation and can be implemented with existing codes. A schematic illustration of the standard mean-field and stochastic mean-field approaches is given in Figure [fig1:lacroix].
The mean-field theory is a quantal approach, and even though it usually underestimates fluctuations of collective observables in the nuclear physics context, these fluctuations are non-zero. Within mean-field theory, the expectation value of an observable @xmath17 is obtained through @xmath18, where @xmath19 has the form (eq: mbdensity). Accordingly, the quantal average and fluctuations of a one-body observable @xmath20 along the mean-field trajectory are given by @xmath21 and @xmath22. An important aspect of the SMF approach is that the quantum expectation value is replaced by a classical statistical average over the initial conditions. Denoting by @xmath23 the value of the observable at time @xmath24 for a given event, fluctuations are obtained using @xmath25, where @xmath26. The statistical properties of the initial conditions ensure that the quantal [@xmath27] and statistical [@xmath28] fluctuations are equal at the initial time. Note that such a classical mapping is a known technique for simulating quantum objects and might even be exact in some cases @xcite. In practice, it might be advantageous to select a few collective degrees of freedom instead of the full one-body density matrix. At the mean-field level, the evolution of a set of one-body observables @xmath29 is given by the Ehrenfest theorem, @xmath30. If a complete set of one-body observables is taken, for instance the full set of operators @xmath31, one recovers Eq. (eq: puretdhf). In many situations, one might further reduce the evolution to a restricted set of relevant degrees of freedom in such a way that the mean-field approximation leads to a closed set of equations between them, i.e. @xmath32. Starting from this equation, one can also formulate the SMF theory directly in the selected space of degrees of freedom by considering a set of initial conditions @xmath33 and by using directly the evolution @xmath34 for each initial condition @xmath11. Note that the statistical properties, i.e. the first and second moments, of the initial conditions should be computed using the conditions (eq: mean) and (eq: fluc).
In recent years, we have applied the SMF approach both to schematic models and to realistic situations encountered in nuclear reactions where mean field alone was unable to provide a suitable answer. Some examples are briefly discussed below.
As mentioned in the introduction, mean-field theory alone cannot break a symmetry by itself. The symmetry breaking can often be regarded as the presence of a saddle point in a collective space, and the absence of symmetry breaking in mean field just means that the system stays at the top of the saddle if it is left there initially. Such a situation is well illustrated by the Lipkin-Meshkov-Glick model. This model consists of @xmath35 particles distributed in two N-fold degenerate single-particle levels separated by an energy @xmath36. The associated Hamiltonian is given by (taking @xmath37) @xmath38, where @xmath39 denotes the interaction strength, while @xmath40 (@xmath41, @xmath42, @xmath6) are the quasi-spin operators defined as @xmath43, with @xmath44, @xmath45, and where @xmath46 and @xmath47 are creation operators associated with the upper and lower single-particle levels. In the following, energies and times are given in units of @xmath36 and @xmath48, respectively. It can be shown that the TDHF dynamics can be recast as a set of coupled equations between the expectation values of the quasi-spin operators @xmath49 (for @xmath41, @xmath42 and @xmath6), given by @xmath50, where @xmath51. Note that this equation of motion is nothing but a special case of Eq. (eq: rel), where the information is contained in the three quasi-spin components. To illustrate the symmetry breaking in this model, it is convenient to display the Hartree-Fock energy @xmath52 as a function of the @xmath53 component (Fig. [fig2:lacroix]); here the order parameter @xmath54 is used for convenience. When the strength parameter is larger than a critical value (@xmath55), the parity symmetry is broken in the @xmath56 direction. (Figure [fig2:lacroix] shows the HF energy as a function of @xmath56 for @xmath57 (dashed line), @xmath58 (dotted line) and @xmath59 (solid line) for @xmath60 particles; the arrow indicates the initial condition used in the SMF dynamics.) For @xmath55, if the system is initially at the position indicated by the arrow in Fig. [fig2:lacroix], with TDHF it will remain at this point, i.e. this initial condition is a stationary solution of Eq. (eq: tdhf). Following the strategy discussed above, an SMF approach can be directly formulated in collective space, where random initial conditions for the spin components are taken. Starting from the statistical properties (eq: mean) and (eq: fluc), it can be shown that the quasi-spins should initially be sampled according to Gaussian probabilities, with first moments given by @xcite @xmath61 and second moments determined by @xmath62, while the @xmath6 component is a non-fluctuating quantity.
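To make the recipe concrete, the sketch below samples Gaussian initial quasi-spins and evolves each sample along its own mean-field trajectory before averaging. Because the Hamiltonian, the coupled mean-field equations and the sampling moments are all elided above, the specific forms used here are assumptions based on common conventions for the model: H = eps*J_z - (V/2)(J+^2 + J-^2), the corresponding mean-field equations for (j_x, j_y, j_z), and an initial state with all particles in the lower level, so that j_z = -N/2 is fixed while j_x and j_y fluctuate with variance N/4.

    import numpy as np

    # Assumed LMG parameters; eps = 1 sets the energy and time units.
    N, eps, V = 40, 1.0, 0.06            # V*(N-1)/eps > 1: symmetry-broken regime
    n_traj, dt, n_steps = 10_000, 0.005, 4000
    rng = np.random.default_rng(0)

    # Gaussian initial sampling (assumed moments, see text above):
    jx = rng.normal(0.0, np.sqrt(N / 4), n_traj)
    jy = rng.normal(0.0, np.sqrt(N / 4), n_traj)
    jz = np.full(n_traj, -N / 2.0)       # non-fluctuating component

    for _ in range(n_steps):             # each trajectory follows its own
        djx = -eps * jy + 2 * V * jy * jz        # mean field (assumed form)
        djy = eps * jx + 2 * V * jx * jz
        djz = -4.0 * V * jx * jy
        jx, jy, jz = jx + dt * djx, jy + dt * djy, jz + dt * djz

    # Classical statistical averages replace quantum expectation values:
    print("mean j_z:", jz.mean(), "fluctuation sigma_z:", jz.std())

A plain TDHF run corresponds to the single trajectory started exactly at j_x = j_y = 0, which never leaves the saddle point; the spread of the sampled ensemble is what allows the SMF average to develop nontrivial dynamics and fluctuations.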
(Figure [fig3:lacroix] shows the quasi-spin component obtained when the initial state is @xmath63, for three different values of @xmath64: @xmath65 (solid line), @xmath58 (dotted line) and @xmath66 (dashed line), for @xmath60 particles, together with the corresponding SMF results shown as circles, squares and triangles, respectively; adapted from @xcite.) An illustration of the initial sampling (top) and of the results obtained by averaging mean-field trajectories with different initial conditions is shown in Fig. [fig3:lacroix] and compared to the exact dynamics. As we can see from the figure, while the original mean field gives constant quasi-spins as a function of time, the SMF approach greatly improves the dynamics and follows the exact evolution up to a certain time that depends on the interaction strength. As shown in Fig. [fig4:lacroix], the stochastic approach improves the description not only of the mean values of one-body observables but also of the fluctuations. (Figure [fig4:lacroix] shows results for three values of @xmath64, from top to bottom @xmath65 (a), @xmath58 (b) and @xmath66 (c); in each case, solid, dashed and dotted lines correspond to @xmath67, @xmath68 and @xmath69, respectively, and the SMF results are shown as triangles (@xmath70), squares (@xmath71) and circles (@xmath72); taken from @xcite.)
The SMF approach has recently been used to deduce, from a fully microscopic theory, the transport coefficients associated with momentum dissipation and mass transfer during reactions @xcite. The TDHF theory provides a powerful way to gain insight into nuclear reactions and to treat various effects, such as deformation, nucleon transfer and fusion, within a quantal transport theory. An illustration of the nuclear densities obtained at various times during the @xmath73Ca + @xmath74Zr reaction is given in Fig. [fig5:lacroix]. (The figure shows density contour plots for the central collision of the @xmath73Ca + @xmath74Zr system at @xmath75 MeV, in units of @xmath76; the black dot is the centre-of-mass point, the red lines indicate the position of the window @xmath77, and @xmath78 denotes the velocity of the window; taken from @xcite.) The mean-field approach does include the so-called one-body dissipation associated with the deformation of the system and/or the exchange of particles. For instance, considering a set of observables, denoted generically @xmath79 (such as the relative distance, the relative momentum, the angular momentum between the nuclei, or the number of nucleons inside one of the nuclei), it is possible to reduce the TDHF evolution and obtain classical equations of motion of the form @xmath80, where @xmath81 is an eventual driving force while @xmath82 corresponds to drift coefficients. The nucleus-nucleus interaction potential and the energy loss associated with internal dissipation have been extracted in Ref. @xcite using such a formula. When TDHF is extended to incorporate initial fluctuations, the equation of motion itself becomes a stochastic process, @xmath83. For short times, the average drifts @xmath84 should identify with the TDHF ones, while the extra term is a random variable that leads to dispersion around the mean trajectory. In the Markov limit, one can define the diffusion coefficient @xmath85. This mapping has recently been used not only to study dissipative processes but also to estimate fluctuation properties in momentum and mass exchange. Denoting by @xmath86 the diffusion coefficient associated with mass, fluctuations in the mass of the target and/or projectile can be computed using the simple formula @xmath87. In Figure [fig6:lacroix], an example of the estimated variances during the asymmetric reaction @xmath73Ca + @xmath74Zr is shown as a function of time, in the case of a fusion reaction (top) and below the Coulomb barrier (middle and bottom panels).
(The figure shows results for the @xmath73Ca + @xmath74Zr system at three different centre-of-mass energies; the dotted lines denote the total number of exchanged nucleons up to a given time @xmath24; taken from @xcite.) All cases correspond to central collisions. Note that below the Coulomb barrier the target and projectile re-separate after having exchanged a few nucleons, corresponding to transfer reactions. In general, it is observed that the fluctuations are greatly increased compared to the original TDHF result and are compatible with the net number of nucleons exchanged from one nucleus to the other (dotted lines in Fig. [fig6:lacroix]).
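In this Markovian picture, the mass variance is accumulated from the diffusion coefficient along the mean trajectory, sigma^2_AA(t) = 2 * integral from 0 to t of D_AA(s) ds. Since the actual D_AA(t) extracted from the simulations is not reproduced here, the sketch below applies that accumulation to an invented bump-shaped profile mimicking the period of contact.

    import numpy as np

    def mass_variance(times, D_AA):
        """sigma^2(t) = 2 * integral_0^t D_AA(s) ds (trapezoidal rule)."""
        increments = 0.5 * (D_AA[1:] + D_AA[:-1]) * np.diff(times)
        return 2.0 * np.concatenate(([0.0], np.cumsum(increments)))

    # Invented diffusion profile: nucleon exchange while the nuclei overlap.
    t = np.linspace(0.0, 600.0, 601)                  # time (fm/c, assumed units)
    D = 0.02 * np.exp(-(((t - 300.0) / 120.0) ** 2))  # nucleons^2 per fm/c
    sigma2 = mass_variance(t, D)
    print(f"asymptotic mass variance: {sigma2[-1]:.2f} nucleons^2")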
in this contribution , illustrations of the application of the stochastic mean - field theory are discussed .
it is shown , that the introduction of initial fluctuations followed by a set of independent mean - field trajectories greatly improves the original mean - field picture .
in particular , it seems that this approach is a powerful to increase the fluctuations that are generally strongly underestimated in tdhf or to describe the many - body dynamics close to a saddle point .
Acknowledgments: S.A., B.Y., and K.W. gratefully acknowledge GANIL for the support and warm hospitality extended to them during their visits. This work is supported in part by US DOE Grant No. DE-FG05-89ER40530.
D. Lacroix, S. Ayik, and Ph. Chomaz, Prog. Part. Nucl. Phys. 52 (2004) 497.
S. Ayik, Phys. Lett. B 658 (2008) 174.
D. Lacroix, S. Ayik, and B. Yilmaz, Phys. Rev. C 85 (2012) 041602.
S. Ayik, K. Washiyama, and D. Lacroix, Phys. Rev. C 79 (2009) 054606.
K. Washiyama, S. Ayik, and D. Lacroix, Phys. Rev. C 80 (2009) 031602(R).
B. Yilmaz, S. Ayik, D. Lacroix, and K. Washiyama, Phys. Rev. C 83 (2011) 064615.
C. Simenel, B. Avez, and D. Lacroix, in Lecture Notes of the International Joliot-Curie School, Maubuisson (2007), arXiv:0806.2714.
J. P. Blaizot and G. Ripka, Quantum Theory of Finite Systems (MIT Press, Cambridge, Massachusetts, 1986).
P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer-Verlag, New York, 1980).
M. F. Herman and E. Kluk, Chem. Phys. 91, 27 (1984).
K. G. Kay, J. Chem. Phys. 100, 4432 (1994); 101, 2250 (1994).
K. Washiyama and D. Lacroix, Phys. Rev. C 78, 024610 (2008). | In the stochastic mean-field approach, an ensemble of initial conditions is considered in order to incorporate correlations beyond the mean field. Each starting point is then propagated separately using the time-dependent Hartree-Fock equation of motion.
This approach provides a rather simple tool to describe fluctuations better than standard TDHF does. Several illustrations are presented, showing that this theory can be rather effective in treating the dynamics close to a quantum phase transition. Applications to fusion and transfer reactions demonstrate the great improvement in the description of the mass dispersion.
A 62-year-old female patient was referred by a regional hospital to the department of thoracic and cardiovascular surgery because of a mass in her sternum. At presentation, the patient had no disease other than hypertension. She had received calcium channel blocker therapy for her hypertension at a regional hospital for more than 3 years prior to this presentation, and her blood pressure was well controlled. She was a housewife by occupation and did not have any specific family history of cancer. Her blood pressure was 118/72 mmHg; pulse rate, 64 beats/min; respiratory rate, 16/min; and body temperature, 36.7°C. A physical examination revealed a bulging mass in the mid-sternum and mild tenderness around the mass. Laboratory tests revealed a white blood cell count of 7,800/μL and hemoglobin of 13.2 g/dL. However, the other initial laboratory test results were within the normal range.
Chest computed tomography (CT) revealed a sternal mass, 4 × 4 cm in size, containing an osteoclastic lesion in the body of the sternum. A further evaluation using whole-body bone scintigraphy revealed a focus of mildly increased uptake in the sternum, with a bone-to-soft tissue ratio of 2.34:1, and no evidence of a hot-uptake lesion in the other bony areas. Positron emission tomography-CT (PET-CT) with fluorine-18 fluorodeoxyglucose showed a mildly hypermetabolic mass in the body of the sternum with a standardized uptake value of 4.6 but did not detect any other lesions or distant metastasis beyond the sternal lesion (Fig.). An incisional biopsy was performed, which showed atypical plasma cells having centrally or eccentrically located ovoid nuclei and eosinophilic cytoplasm. These histological findings indicated a plasma cell neoplasm suggestive of solitary plasmacytoma of the sternum (Fig.).
Immunohistochemical studies of the specimen were positive for lambda chains and negative for kappa chains, and the results were all negative for cluster of differentiation 99, neuron-specific enolase, S-100, cytokeratin, and desmin. Further studies were performed to differentiate solitary plasmacytoma of the sternum from multiple myeloma (MM). Complete skeletal radiographs, including the humerus and femur, revealed negative results and no evidence of other osteolytic lesions. In addition, laboratory studies, including complete blood cell count, serum electrophoresis, 24-hour urine protein electrophoresis, alkaline phosphatase, lactate dehydrogenase, C-reactive protein, serum calcium, and phosphate, showed normal values and no evidence of anemia, hypercalcemia, or renal involvement suggestive of systemic myeloma. Multidisciplinary approaches for further management, including radiotherapy, surgical management with wide resection, and chemotherapy, were discussed, and radiation therapy was determined to be the best choice for the patient.
Solitary bone plasmacytoma (SBP) is defined as a clonal proliferation of plasma cells identical to those of plasma cell myeloma, which manifests itself as a localized osseous growth. Plasma cell neoplasms account for approximately 1% to 2% of all human malignancies and occur at a rate of about 3.5/100,000 per year. SBP is composed of monoclonal plasma cells that are cytologically, immunophenotypically, and genetically identical to those seen in MM. MM is a multifocal plasma cell proliferation in the bone marrow that produces excess immunoglobulin and infiltrates bone. Free light chains are also produced along with intact proteins; these light chains are detected by urine protein electrophoresis and are designated Bence-Jones proteins. Excess cytokines activate osteoclasts, leading to bone destruction and subsequently to discrete lytic lesions or diffuse osteopenia. Patients are susceptible to recurrent bacterial infections due to the suppression of normal humoral immunity, which is frequently the cause of death. Bence-Jones proteins are toxic to renal tubular cells and thus may contribute to renal insufficiency or failure.
Chondrosarcoma is the most common primary malignant tumor of the sternum, although its incidence is extremely low. Of the 11,087 bone tumors in the Dahlin tumor series at the Mayo Clinic, only 66 (0.6%) were primary malignant tumors of the sternum. Of these 66 tumors, 22 (33%) were chondrosarcomas; 20 (30%) were myelomas, including plasmacytomas; 14 (21%) were lymphomas; 8 (12%) were osteosarcomas; 1 (1.5%) was a fibrosarcoma; and 1 (1.5%) was a Ewing tumor. One series of sternal tumors observed over 16 years, between 1993 and 2009, reported that primary tumors of the sternum were very rare, accounting for only 0.5% of all the primary bone tumors encountered, and that these tumors were often malignant, osteolytic, and aggressive. That series also described 6 cases of primary malignant tumors of the sternum, of which 3 were plasmacytomas, 1 was a chondrosarcoma, 1 was an osteosarcoma, and 1 was a large B-cell lymphoma.
SBP is defined as a tumor confined to a bone with no multiple osteolytic lesions, while extramedullary plasmacytoma (EMP) is defined as a tumor that occurs only in soft tissue with no multiple osteolytic lesions. SBP is relatively rare and accounts for 3% to 5% of all plasma cell neoplasms. The median age at diagnosis of SBP is 55 years, and SBP occurs 10 years earlier than MM. Males are more frequently affected than females, and one study reported that two-thirds of all patients were male. SBP more commonly involves the axial skeleton, and distal appendicular disease is extremely rare. As in MM, marrow areas with active hematopoiesis are targeted, including the vertebrae, ribs, skull, pelvis, femur, clavicle, and scapula, in order of decreasing frequency. The thoracic vertebrae are more frequently involved than the lumbar, sacral, and cervical spine. Punched-out defects are seen on radiographs and usually measure 1 to 4 cm. Further, cord compression may be the presenting feature of solitary plasmacytoma involving the vertebrae. Soft tissue extension of the tumor may result in palpable masses, particularly when a rib is involved.
The International Myeloma Working Group has established the following criteria for the diagnosis of SBP: a solitary bone lesion; a biopsy showing histological evidence of bony involvement by plasma cells; complete skeletal radiographs, including the humerus and femur, that show no other lytic lesions; absence of clonal plasma cells in a random sample of bone marrow; no evidence of anemia, hypercalcemia, or renal involvement suggestive of systemic myeloma; and immunofixation of serum and concentrated urine showing no monoclonal proteins. CT and, particularly, magnetic resonance imaging (MRI) depict the extent of SBP more clearly. MRI is useful for identifying additional unsuspected plasma cell lesions that do not appear on a skeletal survey. Some recent studies have emphasized the necessity of CT and MRI along with 99mTc-MIBI scintigraphy, while others have mentioned the increased diagnostic sensitivity of fluorine-18 fluorodeoxyglucose PET. Electrophoresis of serum and urine samples reveals monoclonal proteins in 24% to 72% of SBP patients, although protein levels are considerably lower in SBP patients than in MM patients. All SBP patients should undergo serum and urine immunofixation even when electrophoresis results are normal, because monoclonal proteins may not be detected in approximately one-third of all patients. Histologically, plasmacytoma appears as sheets of plasma cells: small round blue cells with 'clock-face' nuclei and abundant cytoplasm with a perinuclear clearing or 'halo.' Plasmacytoma exhibits monoclonal kappa or lambda light chains, whereas the plasma cells of reactive chronic osteomyelitis are polyclonal.
Radiotherapy is the treatment of choice for SBP; however, controversy exists regarding extensive wide resection. Treatment fields should be designed to encompass all disease shown by MRI or CT scanning and should include a margin of normal tissue. Localized radiotherapy should be administered even if the tumor is completely removed for diagnostic purposes. The local response rate has been shown to be 80% to 90%, and there is no clinical evidence that adjuvant or prophylactic chemotherapy prevents the ultimate development of MM. Approximately 55% of patients with SBP develop MM within 10 years of successful treatment; 10% develop local recurrences or solitary plasmacytomas at different locations. Although solitary plasmacytoma of the sternum occurs less commonly than MM, it must be considered in the differential diagnosis of bone and soft tissue tumors, particularly in the absence of lytic lesions on a skeletal survey and in the absence of clinical evidence of end-organ damage. Unfortunately, more than half of the patients with solitary plasmacytoma develop MM during their lifetime. | Plasmacytoma is a plasma cell neoplasm that locally infiltrates a bone or spreads to extramedullary areas.
A new World Health Organization criterion defines solitary plasmacytoma of bone as a localized bone tumor consisting of plasma cells identical to those seen in plasma cell myeloma, manifested as a solitary osteolytic lesion on radiological evaluation. Primary tumors of the sternum are generally malignant, and solitary plasmacytomas of the sternum are very rare tumors. We present herein the case of a patient who had a primary sternal tumor with solitary plasmacytoma and no evidence of multiple myeloma.
Most fusions occur between C3 and C7, adjacent to the highly mobile upper cervical region that accommodates approximately half of all cervical motion.4 Transfer of motion and stress to adjacent levels after a fusion has been extensively studied biomechanically. Schwab et al examined human cadaveric cervical spines from C2 to T1 and assessed the effects of incremental single-level fusions at different regions of the cervical spine.5 They concluded that motion compensation was distributed among the unfused segments, with significant compensation at the segments adjacent to the fusion. Interestingly, the fusion level determined whether the increased motion was seen at the adjacent level above or below the fusion. When the fusion level was at C3-C4 or C4-C5, significant increases in motion were seen at the level above the fusion. When the fusion level was at C5-C6 or C6-C7, significantly increased motion was seen at the levels both above and below the fusion. Greater compensation occurred at the inferior segments than at the superior segments for these lower-level fusions at C5-C6 and C6-C7. Although this study demonstrated increased motion at the levels adjacent to single-level cervical fusions, it did not conclude that this increased motion was responsible for adjacent segment degeneration.
Baba et al studied 106 patients who underwent ACDF for cervical myelopathy, with an average of 8.5 years of follow-up.6 They found that 25% of patients developed spinal canal stenosis at the level above the previously fused segments. Gore and Sepic followed 121 patients who had undergone an ACDF for an average of 5 years.7 They found that 25% had new-onset spondylosis, and another 25% had progression of preexisting spondylosis. Neither of these two studies found any correlation between adjacent segment degeneration and clinical symptoms.
Matsumoto et al performed a prospective study with 10-year follow-up comparing magnetic resonance imaging (MRI) findings of patients who underwent an ACDF with those of healthy control subjects.8 They further subcategorized cervical spondylosis as decreased signal intensity of the disc (DSI), posterior disc protrusion (PDP), disc space narrowing, and foraminal stenosis. Results showed that DSI occurred significantly more often in the ACDF group at the C4-C5 level, and PDP occurred significantly more often in the ACDF group at all levels except C5-C6. Disc space narrowing and foraminal stenosis occurred significantly more often in the ACDF group at C3-C4 and C6-C7, respectively. They concluded that although progression of cervical spondylosis occurred in both groups, ACDF did accelerate adjacent segment degeneration.
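Level-by-level findings of this kind are usually tested as 2 × 2 contingency tables (finding present or absent, ACDF versus control). The sketch below shows that computation with entirely hypothetical counts; the study's raw numbers are not reproduced here.

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table for one finding at one level (counts invented):
    # rows = ACDF group, control group; columns = finding present, absent.
    observed = [[40, 60],
                [20, 80]]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.05: "significantly more"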
Hilibrand et al's landmark work on symptomatic adjacent segment degeneration followed 374 patients for a maximum of 21 years.9 They found that symptomatic adjacent segment degeneration occurred at a relatively constant rate of 2.9% per year for the first 10 years after surgery. Kaplan-Meier survivorship analysis predicted that 25.6% of patients would have symptomatic adjacent segment disease 10 years postoperatively. In addition, patients who underwent a multilevel arthrodesis were significantly less likely to develop adjacent segment degeneration than those who underwent a single-level fusion. The authors concluded that adjacent segment degeneration was likely related to the natural history of cervical spondylosis and not to the fusion itself.
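Those two figures are mutually consistent: a constant annual rate of 2.9%, applied each year to the patients still free of the disease, compounds to almost exactly the reported 10-year Kaplan-Meier estimate. The sketch below is an illustrative calculation, not an analysis from the study.

    # Constant 2.9% annual rate of symptomatic adjacent segment disease:
    annual_rate = 0.029

    event_free = 1.0
    for year in range(10):
        event_free *= 1.0 - annual_rate  # fraction still disease-free each year

    cumulative_incidence = 1.0 - event_free
    print(f"10-year cumulative incidence: {cumulative_incidence:.1%}")  # ~25.5%

The result, about 25.5%, sits next to the reported 25.6%, which is what a Kaplan-Meier estimate reduces to under a constant yearly hazard.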
The aforementioned studies underscore the conundrum surgeons face when performing an ACDF: it has provided excellent clinical results for decades, yet the question of adjacent segment degeneration continues to spur interest in motion-preserving technology. Currently, nine different cervical total disc arthroplasty devices have completed the Food and Drug Administration (FDA)-regulated investigational device exemption (IDE) clinical trial. These devices have evolved significantly over time, incorporating principles of both orthopedics and tribology.
The first cervical artificial disc was implanted by Ulf Fernstrom in 1966.10 It consisted of a stainless-steel ball-bearing prosthesis and was used in both the cervical and lumbar spines. It was implanted in over 250 patients but later fell out of favor because of unacceptably high failure rates. The device caused hypermobility and was found to erode into the vertebral end plate and body. Cervical arthrodesis procedures developed by Smith and Robinson increased in popularity and sidelined arthroplasty devices for the next 2 decades.
Use of lumbar arthroplasty devices in the 1980s spurred renewed interest in cervical motion preservation. In 1989, Cummins designed a stainless-steel metal-on-metal cervical artificial disc in Bristol, UK. It was a stainless-steel, ball-and-socket design with two anchoring screws. Initial clinical results in the 18 patients showed unacceptably high failure rates: three cases of screw pullout, one of screw breakage, one subluxed joint, and persistent dysphagia reported in all 18 patients.11 It was redesigned and reintroduced as the Frenchay cervical disc. The Frenchay had better clinical results and was purchased by Medtronic and renamed the Prestige disc.12 In 2007, the Prestige disc was approved by the FDA for treatment of cervical radiculopathy and/or cervical myelopathy between C3 and C7.
The Bryan cervical disc was designed in 1992 by an American neurosurgeon, Vincent Bryan.
It is a metal-on-plastic design, consisting of two titanium alloy shells with a polyurethane core.
A polyurethane sheath surrounds the nucleus and is filled with saline, mimicking synovial fluid and containing any potential wear debris.
Unlike the Prestige, the Bryan disc is not secured into the disc space with any hardware; it relies on a press-fit for initial fixation.
The ProDisc-C consists of cobalt-chromium-molybdenum (CCM) end plates with an ultra-high-molecular-weight polyethylene (UHMWPE) articulating surface.
Two keels on each surface anchor the ProDisc-C to the vertebral end plates, paralleling the design of its lumbar counterpart, the ProDisc-L.
The Porous Coated Motion (PCM) disc prosthesis is also composed of CCM end plates that articulate with a UHMWPE inner core.
The outer surfaces of the end plates are serrated and coated with titanium/calcium phosphate, allowing for bony ingrowth.
Initial fixation is achieved by a press-fit mechanism, and its broad radius of curvature is thought to provide more end-plate support laterally.
The Kineflex|C (SpinalMotion, Mountain View, CA), a cobalt-chrome metal-on-metal semiconstrained disc with a mobile center core, utilizes midline keels for immediate fixation.
Multiple other devices are in development and will likely come to market: the CerviCore (Stryker, Kalamazoo, MI), Discocerv (Scient'x, Maitland, FL), and NeoDisc (NuVasive).
The variations in design and implantation have given surgeons multiple devices to choose from, and the number will continue to grow.
Each of these designs attempts to closely mimic the kinematics of the native cervical disc in the hope of preserving motion and preventing degeneration of the adjacent segments.
Multiple prospective, randomized, multicenter studies have examined the indications and outcomes of these devices, with varying results.
Mummaneni et al performed a prospective, randomized, multicenter study comparing the Prestige ST with ACDF [13]. Their study included 541 patients: 276 in the investigational group underwent anterior cervical arthroplasty with the Prestige ST, and 265 patients in the control group underwent a single-level ACDF.
Primary outcome measures included the Neck Disability Index (NDI) score, neurological success, Short Form (SF)-36 scores, supplementary surgical procedures, relief of neck and arm pain, and return to work.
Improvements in the NDI score significantly favored the arthroplasty group at 6 weeks and 3 months but lost significance at 12 and 24 months.
The SF-36 physical component summary (PCS) and mental component summary (MCS) scores were not significantly different between the two groups.
Employment status did not differ between the two groups at the end of the study, but the investigational group returned to work on average 16 days earlier. Neck pain was significantly improved in the investigational group up to 12 months, but there were no differences in arm pain between the two groups.
Their results showed a significantly lower reoperation rate for adjacent segment degeneration in the arthroplasty group, but no criteria were given for its clinical or radiographic diagnosis.
Neurological success, defined as a 15-point or greater improvement in the NDI together with maintenance or improvement in neurological status, was significantly higher in the control group at 12 and 24 months.
Overall, their results showed early benefits in the arthroplasty group that became insignificant by 24 months.
These results should be interpreted with caution, as the follow-up rates in this study were low: 80% in the investigational group and 75% in the control group.
Murrey et al examined the ProDisc-C versus ACDF for symptomatic one-level cervical disease in a prospective, multicenter, randomized study [14]. The patient follow-up in this study was excellent: 94.8% for the control group and 98% for the investigational group.
Their results showed no statistically significant differences between the two cohorts at 24 months for visual analog scale (VAS) neck and arm pain, SF-36, NDI, or neurological success.
There was a significant difference in strong narcotic use between the groups, favoring arthroplasty.
In addition, there was a significant difference in the number of secondary surgeries between the two groups, favoring arthroplasty: 8.5% of fusion patients required a reoperation, revision, or supplemental fixation within the 24-month postoperative period, compared with 1.8% of the arthroplasty group.
Overall, the study demonstrated essentially equivalent short-term outcomes between the two groups.
Heller et al performed a prospective, randomized, multicenter study comparing single-level ACDF with Bryan cervical disc arthroplasty [15]. At 24-month follow-up, the disc arthroplasty group had statistically greater improvement in the primary outcome variables: NDI scores and overall success, defined as a 15-point or greater improvement in the NDI score, maintenance or improvement of neurological status, no serious adverse events related to the implant or surgical procedure, and no subsequent surgery or intervention classified as failure. Neck and arm pain improved significantly in both groups from baseline, but the arthroplasty group had a significantly greater reduction in neck pain.
The investigational group returned to work approximately 2 weeks earlier than the control group, but there was no significant difference in return-to-work rates at 2 years postoperatively.
Though this study concluded that cervical arthroplasty with the Bryan disc is a viable alternative to ACDF for single-level disease, several shortcomings of the study are noteworthy.
Twelve patients who were assigned to the investigational group received the control treatment because of anatomic constraints: four had a disc space smaller than the smallest available Bryan disc, five could not have the C6-C7 disc space adequately visualized with intraoperative radiography and therefore could not have the Bryan disc safely implanted, and one mistakenly received the control treatment.
Another area of concern is the fact that 117 randomized patients declined participation in the study before receiving their assigned treatment.
Though the authors comment that there were no statistical differences in demographics and baseline measurements between the patients who dropped out and those who participated, such a high number of dropouts inevitably introduced bias into the results.
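For clarity, the composite "overall success" endpoint used in this trial can be expressed as a simple classification rule over the four criteria listed above. The sketch below (Python) is a minimal illustration; the dataclass and field names are assumptions made here, not the trial's actual case-report forms:

    from dataclasses import dataclass

    @dataclass
    class PatientOutcome:
        ndi_baseline: float         # NDI score before surgery
        ndi_24_months: float        # NDI score at 24-month follow-up
        neuro_maintained: bool      # neurological status maintained or improved
        serious_device_event: bool  # serious implant/procedure-related adverse event
        failure_reoperation: bool   # subsequent surgery classified as failure

    def overall_success(p):
        # All four criteria from the trial definition must be met
        ndi_improved = (p.ndi_baseline - p.ndi_24_months) >= 15
        return (ndi_improved and p.neuro_maintained
                and not p.serious_device_event and not p.failure_reoperation)

    # Example: improvement from NDI 52 to 30 with no complications counts as success
    print(overall_success(PatientOutcome(52, 30, True, False, False)))  # True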
Garrido et al, whose patients were part of the aforementioned study, reported outcomes on their cohort of patients at 48 months [16]. Primary outcome measures were NDI, VAS neck and arm pain, and SF-36 scores, along with complications and reoperations.
Their data demonstrated improved outcomes in the arthroplasty group for all outcome measures except the SF-36 physical component score.
However, because of its small sample size, the study was underpowered and failed to reach statistical significance in any outcome measure.
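To illustrate why a small cohort can show consistent trends yet fail to reach significance, a standard two-sample power calculation is shown below. This is a sketch under stated assumptions: the medium effect size of 0.5 (Cohen's d) is a value assumed for illustration, as the Garrido report does not state one.

    from statsmodels.stats.power import TTestIndPower

    # Patients per arm needed to detect a medium standardized difference
    # (Cohen's d = 0.5) with 80% power at alpha = 0.05
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"~{n_per_group:.0f} patients per arm needed")  # roughly 64 per arm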
Recently, Coric et al performed a prospective, randomized, multicenter study comparing 2-year outcomes of the Kineflex|C artificial cervical disc with ACDF for single-level disease [17]. Primary outcome measures such as the VAS pain scores and overall clinical success were all significantly in favor of the arthroplasty group.
NDI improved significantly in both groups, but there was no significant difference between groups at the 2-year end point.
Adjacent segment degeneration was evaluated by a quantitative analysis of disc height and by an independent radiologist's subjective assessment based on a previously published qualitative and quantitative analysis of disc height [18]. Though radiographic evidence of severe adjacent segment degeneration was significantly lower in the arthroplasty group, reoperation rates at the adjacent levels showed no significant difference between groups.
Based on the published data, the authors cannot conclude that symptomatic adjacent-level disease differs between the two cohorts.
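As an illustration of what such a quantitative disc-height analysis might look like, the sketch below grades adjacent-segment change from paired height measurements. The actual published criteria (reference 18) are not reproduced in this review, so the 10% and 25% thresholds here are hypothetical placeholders, not the study's grading cutoffs.

    def grade_adjacent_segment(baseline_height_mm, followup_height_mm):
        # Fractional disc-height loss relative to the preoperative baseline
        loss = (baseline_height_mm - followup_height_mm) / baseline_height_mm
        if loss < 0.10:
            return "none/mild"   # hypothetical threshold
        if loss < 0.25:
            return "moderate"    # hypothetical threshold
        return "severe"

    print(grade_adjacent_segment(6.0, 4.2))  # 30% height loss -> "severe"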
In one of the few non-industry-sponsored studies, Nunley et al performed a prospective, multicenter study evaluating 170 patients with symptomatic cervical spondylosis at one or two levels [19]. Subjects received either an ACDF or one of three different types of artificial discs.
Primary outcome measures included NDI, neurological examination, and visual analog pain scores up to 48 months after surgery.
Patients who had radiographic and/or MRI evidence of spondylosis at levels other than the index levels to be treated were excluded from the study.
Patients with persistent postoperative symptoms were investigated for adjacent segment degeneration with advanced imaging and neurophysiologic testing, followed by active interventions as indicated.
The authors concluded that the risk of developing adjacent segment degeneration was equivalent between the two cohorts at a median of 38 months postoperatively.
Patients with osteopenia and concurrent lumbar degenerative disease were significantly more likely to develop adjacent segment degeneration.
Over the course of the next few years, motion preservation devices for the cervical spine will continue to increase in number and complexity.
Many of the current randomized, prospective studies conclude that disc arthroplasty is a viable alternative to ACDF, citing improved short-term outcomes in their cohorts of patients.
Late failure of arthroplasty devices, as demonstrated in total knee and total hip arthroplasty, is a real concern and will require long-term studies to define.
In today's health care environment, superior, not merely equivalent, outcomes will be required for a true paradigm shift in the surgical treatment of cervical spondylosis.
Only if these devices can demonstrate reduced rates of symptomatic adjacent segment degeneration in long-term clinical studies will they truly become the standard of care.
| Symptomatic adjacent segment degeneration of the cervical spine remains problematic for patients and surgeons alike. Despite advances in surgical techniques and instrumentation, the solution remains elusive.
Spurred by the success of total joint arthroplasty in hips and knees, surgeons and industry have turned to motion preservation devices in the cervical spine. By preserving motion at the diseased level, the hope is that adjacent segment degeneration can be prevented.
Multiple cervical disc arthroplasty devices have come onto the market and completed Food and Drug Administration investigational device exemption trials.
Though some of the early results demonstrate equivalency of arthroplasty to fusion, compelling evidence of benefit in terms of symptomatic adjacent segment degeneration is lacking.
In addition, non-industry-sponsored studies indicate that these devices are equivalent to fusion in terms of adjacent segment degeneration.
Longer-term studies will eventually provide the definitive answer. |
Extrinsic compression of the celiac trunk by the median arcuate ligament (MAL) occurs in 10-24% of patients. Although this compression may lead to clinical manifestations of postprandial epigastric pain, nausea or vomiting, and weight loss, it is usually asymptomatic, presumably because of collateral supply from the superior mesenteric circulation.
We present a rare case of MAL compression of a celiomesenteric trunk (CMT) diagnosed by computed tomography (CT) angiography, in which a lack of collateral circulation (and subsequent ischemia) is the most likely cause of recurrent abdominal pain.
A 26-year-old male with no significant past medical history presented with a 1-year history of intermittent abdominal pain exacerbated by meals, leading to an aversion to food and a 25-pound weight loss over the same period.
The main findings on abdominal CT with intravenous (IV) contrast were segmental thickening and inflammation of the ascending colon.
The symptoms did not improve with antibiotic administration, and the patient was referred for further gastrointestinal evaluation.
A repeat colonoscopy showed no ulcers, but ultrasound and a subsequent CT angiogram (CTA) revealed compression of the celiac axis [Figure 1].
This was confirmed by sagittal [Figure 2] and three-dimensional reconstructions [Figure 3], which showed a common origin of the celiac axis and superior mesenteric artery (SMA) with approximately 70% narrowing and post-stenotic dilation of the celiac axis and SMA.
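The degree of narrowing reported on CTA reflects simple diameter arithmetic: the minimal luminal diameter is compared with a normal reference diameter of the same vessel. A minimal sketch follows; the diameters used are hypothetical example values, not measurements from this patient.

    def percent_stenosis(reference_diameter_mm, minimal_diameter_mm):
        # Percent diameter stenosis relative to the normal reference segment
        return (1.0 - minimal_diameter_mm / reference_diameter_mm) * 100.0

    print(f"{percent_stenosis(8.0, 2.4):.0f}% narrowing")  # 70% for these example values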
The patient was referred for vascular surgery and underwent lysis of the MAL, a retroperitoneal aorto-SMA bypass, and a retroperitoneal aorto-celiac bypass.
Of note, the patient developed a postoperative thrombotic occlusion of the celiac vascular graft [Figure 4].
The SMA graft remained patent, and blood flow was preserved in the distal vessels of the celiac axis.
The patient has been advised to continue long-term follow-up monitoring of symptoms.
Figure 1: Axial CTA shows compression of the celiac axis by the median arcuate ligament (arrow).
Figure 2: Sagittal CTA shows compression (arrow) of the celiomesenteric trunk by the median arcuate ligament, with post-stenotic dilatation of the celiac and superior mesenteric arteries (arrowheads).
Figure 3: (A) Sagittal oblique three-dimensional (3D) volume-rendered (VR) image of the celiomesenteric trunk (arrow). (B) Magnified view of the 3D VR image shows stenosis (arrow) at the origin of the celiomesenteric trunk.
Figure 4: Coronal CT of the abdomen with IV contrast (status post bypass surgery) shows thrombotic occlusion (arrow) of the celiac vascular graft.
The median arcuate ligament is a fibrous arch that unites the diaphragmatic crura on either side of the aortic hiatus and usually passes over the aorta, superior to the origin of the celiac axis.
In 10% to 24% of the population, the ligament may cross over the proximal portion of the celiac axis and cause a characteristic indentation that is usually asymptomatic.
Additionally, a small subset of this population may present with MAL syndrome, an anatomic and clinical entity in which extrinsic compression of the celiac axis leads to postprandial epigastric pain, nausea or vomiting, and weight loss (often related to "food fear," a fear of pain triggered by eating).
Symptoms are thought to arise from compression of the celiac axis resulting in compromised blood flow.
Although the syndrome was characterized several decades ago, its existence is still challenged by several authors.
Most celiac compressions do not, in fact, present with symptoms, presumably because of collateral supply from the SMA.
Our patient presented with symptoms of chronic, intermittent abdominal pain and significant weight loss, along with radiographic findings confirming compression of the celiac axis by the MAL.
Moreover, imaging studies revealed a common origin of the SMA and the celiac artery.
A common CMT is a rare occurrence, with an estimated incidence of 0.25% among all celiac axis anomalies.
When this anomaly is found, it has wide-ranging implications: a patient with a common CMT is deprived of some of the protective benefits of dual-origin vessels with multiple mutually supporting anastomoses.
Occlusive disease of a CMT would logically produce symptoms of acute or chronic mesenteric ischemia: the redundancy between the celiac and superior mesenteric arterial circulations is nonexistent in the case of a CMT, and a proximal stenosis affecting this vessel would have serious ischemic consequences for the intestine.
Disease involving the rarely encountered CMT anomaly is extremely uncommon; there is only one other reported case in the literature of symptomatic MAL compression of a CMT, diagnosed with conventional angiography.
The CT findings characteristic of MAL compression may not be appreciated on axial images alone.
The focal narrowing has a characteristic hooked appearance that can help distinguish this condition from other causes of celiac artery narrowing, such as atherosclerotic disease.
Treatment of MAL syndrome is aimed at restoring normal blood flow in the celiac axis and eliminating the neural irritation produced by the celiac ganglion fibers. It has traditionally consisted of laparotomy through an upper abdominal incision, open division of the MAL, and resection of the associated periarterial neural tissue comprising the celiac plexus or ganglion.
In a study by Reilly et al, patients treated with both splanchnic nerve decompression and vessel reconstruction experienced better symptom relief than patients treated with celiac decompression alone; a stenosed or occluded celiac axis on angiography was noted in 75% of patients who presented with persistent symptoms.
In summary, we report the CT findings of a rare case of MAL compression of the CMT, in which the anatomic findings indicate the most likely cause of recurrent abdominal pain and weight loss. | Median arcuate ligament (MAL) syndrome is a controversial condition characterized by compression of the celiac trunk and symptoms of intestinal angina.
We present a case of the MAL compressing a celiomesenteric trunk, a rare variation.
We report computed tomography (CT) angiography findings and three-dimensional reconstructions of this rare phenomenon. |