Dataset columns (name, type, observed range):

  system_instruction   string    lengths 29 to 665
  user_request         string    lengths 15 to 882
  context_document     string    lengths 539 to 130k
  full_prompt          string    lengths 74 to 130k
  prompt               string    lengths 853 to 130k
  has_url_in_context   bool      2 classes
  len_system           int64     5 to 108
  len_user             int64     2 to 144
  len_context          int64     90 to 19.9k
  target               float64
  row_id               int64     0 to 859
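The example rows below follow this schema. As a minimal sketch of how a dataset with these columns might be loaded and inspected, the following Python snippet uses the Hugging Face `datasets` library; the repository path is a placeholder assumption, not the dataset's actual name.

```python
# Minimal sketch of loading and inspecting a dataset with the schema above,
# using the Hugging Face "datasets" library. The repository path below is a
# placeholder assumption; substitute the actual dataset name or local files.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")

# The column names should match the schema listed above.
print(ds.column_names)

# Peek at one row without dumping the full context_document
# (which can run to roughly 130k characters).
row = ds[0]
print(row["row_id"], row["has_url_in_context"], row["len_context"])
print(row["user_request"])
print(row["context_document"][:500])
```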
system_instruction: You can only respond using information in the context block.
user_request: What is the difference between supply-side economics and trickle-down economics?
context_document:

In this paper I discuss what can be learned about “trickle-down” ideas from recent empirical evidence on tax incidence. Tax incidence, defined as the effect of tax policies on the distribution of welfare, provides an ideal framework because of the explicit focus on tracing the impacts of a policy beyond the directly affected group (ex. the rich). I arrive at three main lessons. First, recent evidence finds that business income taxes do affect the earnings of workers, but these effects are mostly a result of rent-sharing and taxation of rents, not from traditional supply-side channels. Second, there are systematic differences in the types of workers that are affected by the tax policies, so to understand how taxing businesses or business owners affects the distribution of welfare, it is not sufficient to treat workers/labor as a class. Third, across different income tax policies that statutorily affect the rich, the burden is generally ultimately borne by the rich. I conclude with a discussion of fruitful avenues of further research, particularly on how tax incidence depends on various institutional features of labor markets, product markets and tax systems.

There are two ideas of government. There are those who believe that if you just legislate to make the well-to-do prosperous, that their prosperity will leak through on those below. The Democratic idea has been that if you legislate to make the masses prosperous their prosperity will find its way up and through every class that rests upon it. - William Jennings Bryan (1896)

… to the needy. Mr. Hoover was an engineer. He knew that water trickles down. Put it uphill and let it go and it will reach the driest little spot. But he didn’t know that money trickled up. Give it to the people at the bottom and the people at the top will have it before night, anyhow. But it will at least have passed through the poor fellow’s hands. They saved the big banks, but the little ones went up the flue. - Will Rogers (1932), first use of “trickle down”

The idea of “trickle-down” originated from political debates to describe the economic policies of a party or politician. There was never a formal concept of “trickle-down economics” in the sense of economic theory. The term-of-art was used to describe policies that directly benefited the rich but were justified by arguments that they would ultimately also benefit the middle class and poor. In fact, the term was not originally used by those advocating for such policies, but as a critique of the political discourse promoting such policies. While the term “trickle-down” was not used by William Jennings Bryan in his 1896 speech as he was running for president, the rhetoric was present in the introductory quote above. The term was first introduced by humorist and vaudeville performer Will Rogers in a column critiquing then-President Herbert Hoover’s economic policies, also quoted above. The term, and the critique it embodied, stuck with politicians and parties that promoted economic policies where the direct benefits were for the rich, particularly those with respect to tax policies.[1]

[1] William J. Bennett, a conservative politician who served in the administrations of Ronald Reagan and George H. W. Bush, lamented in his 2007 book, “Humorist Will Rogers referred to the theory that cutting taxes for higher earners and businesses was a ‘trickle-down’ policy, a term that has stuck over the years.”

The relationship between trickle-down ideas, tax policy and economics was secured during the Ronald Reagan administration, when the proposed tax cuts were linked to the recently articulated “supply-side” economic theory. Supply-side economics, broadly developed around the ideas of economists Robert Mundell and Arthur Laffer, focused on growth through reducing marginal income tax rates and promoting investment through lower capital income tax rates and deregulation. These ideas had a natural relationship with trickle-down ideas in that the direct beneficiaries of lower marginal and capital tax rates were disproportionately the rich - those that faced the highest marginal tax rates and disproportionately owned the capital - but the theory stated that this would ultimately benefit lower income consumers/workers through growth (led by capital investment), employment and lower prices. The Reagan administration turned to “supply-side” rhetoric to promote large marginal rate and business income tax cuts, and the concepts of supply-side and trickle-down tax policies have been linked since.[2] Figure 1 shows the Google Trends of the term “trickle down” since 2005, and reveals that spikes in its use are concentrated around changes in tax policy or U.S. presidential elections where tax policy was on the agenda.

1.2. Economic Analysis of Trickle-Down

In this article I will discuss the idea of trickle-down as it relates to taxes. I will focus on tax policies that have direct effects on the rich and capital owners - tax rates faced by high income households and capital tax policies specifically related to growth (supply-side) - with a focus on how the effects of these policies “trickle down” to lower income households or workers. Given that trickle-down originated as a political debate, I will discuss both positive analyses of these policies and normative frameworks that apply to the policies. Given this, the best economic framework to study these questions is the theory of tax incidence. Tax incidence is the study of the impact of taxes on the distribution of welfare, and it derives from the insight that the person or entity with the legal or statutory obligation to make the tax payment may not be the only one whose welfare is affected by the tax. In this way, the study of tax incidence maps directly onto trickle-down ideas by taking the direct or statutory beneficiary of the tax policy and following how it affects the distribution of welfare across the economy (whom does it trickle to?). Therefore, this paper will frame trickle-down ideas through positive and normative applications of tax incidence.

I focus primarily on new empirical research about how taxing capital or the rich affects “the distribution of welfare.” Various economic models offer competing predictions about whether to expect that taxing capital owners at the top of the income distribution affects lower earning workers, and if so, in what direction and by what channel. In the wake of this, some supply-side advocates have lamented how it has been used to promote trickle-down ideas. In a 2007 article titled How Supply-Side Economics Trickled Down, Bruce Bartlett, a former Reagan advisor, wrote, “most accept the basic ideas of supply-side economics – that incentives matter, that high tax rates are bad for growth, and that inflation is fundamentally a monetary phenomenon. . . . Today, supply-side economics has become associated with an obsession for cutting taxes under any and all circumstances. No longer do its advocates in Congress and elsewhere confine themselves to cutting marginal tax rates – the tax on each additional dollar earned – as the original supply-siders did. Rather, they support even the most gimmicky, economically dubious tax cuts with the same intensity. ... today it is common to hear tax cutters claim, implausibly, that all tax cuts raise revenue.” Yet another former Reagan advisor, David Stockman, has issued a competing complaint, arguing that supply-side economics was always a cover for trickle-down ideas, stating, “It’s kind of hard to sell ‘trickle down,’ so the supply-side formula was the only way to get a tax policy that was really ‘trickle down.’ Supply-side is ‘trickle-down’ theory.” Therefore, it is ultimately an empirical question as to whether, and how, changes in these tax rates affect workers.

Advances in data quality, particularly administrative linked firm-worker data, econometric methods for identifying causal effects of tax policies, and micro-economic theory on product and labor markets have led to new insights about whether and how taxes that directly affect the rich / capital owners ultimately affect lower earning workers. I review this new literature according to themes related to trickle-down and supply-side tax ideas and arrive at three main lessons. First, recent evidence finds that business income taxes do affect the earnings of workers, but these effects are mostly a result of rent-sharing and taxation of rents, not from traditional supply-side channels. Second, there are systematic differences in the types of workers that are affected by the tax policies, so to understand how taxing businesses or business owners affects the distribution of welfare, it is not sufficient to treat workers/labor as a class. Third, across different income tax policies that statutorily affect the rich, the burden is generally ultimately borne by the rich. I conclude by arguing that from a policy standpoint, considering who bears the burden of a tax in isolation is insufficient for addressing trickle-down ideas or critiques, and I advocate for a more unified discussion of the efficiency and equity consequences of both tax and spending policies.
full_prompt: the system_instruction, user_request, and context_document above concatenated verbatim (omitted here as a duplicate of those fields).
prompt: the same fields assembled into a chat-style template of the form “{system_instruction} EVIDENCE: {context_document} USER: {user_request} Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.” (context omitted here as a duplicate).
has_url_in_context: false
len_system: 10
len_user: 10
len_context: 1,459
target: null
row_id: 811
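The full_prompt and prompt fields of the row above were summarized rather than repeated verbatim; based on the row's contents, they appear to be simple assemblies of the other text columns. The following Python sketch reconstructs that assembly. It is an inference from this one row, not the dataset's actual preprocessing code, and the helper function names are placeholders.

```python
# A reconstruction sketch (not the dataset's actual generation code) of how the
# full_prompt and prompt columns appear to be derived from the other text
# columns, based on the row shown above. Field names match the schema; the
# helper functions themselves are assumptions for illustration.

def build_full_prompt(row: dict) -> str:
    # full_prompt looks like a plain concatenation of the three text fields.
    return f'{row["system_instruction"]} {row["user_request"]} {row["context_document"]}'


def build_prompt(row: dict) -> str:
    # prompt wraps the same fields in an EVIDENCE / USER / Assistant template,
    # matching the trailing instruction observed in the row above.
    return (
        f'{row["system_instruction"]} '
        f'EVIDENCE: {row["context_document"]} '
        f'USER: {row["user_request"]} '
        "Assistant: Answer *only* using the evidence. "
        "If unknown, say you cannot answer. Cite sources."
    )


# Example usage with placeholder values:
example = {
    "system_instruction": "You can only respond using information in the context block.",
    "user_request": "What is the difference between supply-side economics and trickle-down economics?",
    "context_document": "<context text>",
}
print(build_prompt(example)[:200])
```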
system_instruction: Respond to the following question with only the information I provide within this prompt; your answer should be two paragraphs long.
user_request: Why is Mr Lloyd's strategy unusual?
context_document:

A. INTRODUCTION

1. Mr Richard Lloyd - with financial backing from Therium Litigation Funding IC, a commercial litigation funder - has issued a claim against Google LLC, alleging breach of its duties as a data controller under section 4(4) of the Data Protection Act 1998 (“the DPA 1998”). The claim alleges that, for several months in late 2011 and early 2012, Google secretly tracked the internet activity of millions of Apple iPhone users and used the data collected in this way for commercial purposes without the users’ knowledge or consent.

2. The factual allegation is not new. In August 2012, Google agreed to pay a civil penalty of US$22.5m to settle charges brought by the United States Federal Trade Commission based upon the allegation. In November 2013, Google agreed to pay US$17m to settle consumer-based actions brought against it in the United States. In England and Wales, three individuals sued Google in June 2013 making the same allegation and claiming compensation under the DPA 1998 and at common law for misuse of private information: see Vidal-Hall v Google Inc (Information Comr intervening) [2015] EWCA Civ 311; [2016] QB 1003. Following a dispute over jurisdiction, their claims were settled before Google had served a defence. What is new about the present action is that Mr Lloyd is not just claiming damages in his own right, as the three claimants did in Vidal-Hall. He claims to represent everyone resident in England and Wales who owned an Apple iPhone at the relevant time and whose data were obtained by Google without their consent, and to be entitled to recover damages on behalf of all these people. It is estimated that they number more than 4m.

3. Class actions, in which a single person is permitted to bring a claim and obtain redress on behalf of a class of people who have been affected in a similar way by alleged wrongdoing, have long been possible in the United States and, more recently, in Canada and Australia. Whether legislation to establish a class action regime should be enacted in the UK has been much discussed. In 2009, the Government rejected a recommendation from the Civil Justice Council to introduce a generic class action regime applicable to all types of claim, preferring a “sector based approach”. This was for two reasons: “Firstly, there are potential structural differences between the sectors which will require different consideration. … Secondly, it will be necessary to undertake a full assessment of the likely economic and other impacts before implementing any reform.” See the Government’s Response to the Civil Justice Council’s Report: “Improving Access to Justice through Collective Actions” (2008), paras 12-13.

4. Since then, the only sector for which such a regime has so far been enacted is that of competition law. Parliament has not legislated to establish a class action regime in the field of data protection.

5. Mr Lloyd has sought to overcome this difficulty by what the Court of Appeal in this case described as “an unusual and innovative use of the representative procedure” in rule 19.6 of the Civil Procedure Rules: see [2019] EWCA Civ 1599; [2020] QB 747, para 7. This is a procedure of very long standing in England and Wales whereby a claim can be brought by (or against) one or more persons as representatives of others who have “the same interest” in the claim. Mr Lloyd accepts that he could not use this procedure to claim compensation on behalf of other iPhone users if the compensation recoverable by each user would have to be individually assessed. But he contends that such individual assessment is unnecessary. He argues that, as a matter of law, compensation can be awarded under the DPA 1998 for “loss of control” of personal data without the need to prove that the claimant suffered any financial loss or mental distress as a result of the breach. Mr Lloyd further argues that a “uniform sum” of damages can properly be awarded in relation to each person whose data protection rights have been infringed without the need to investigate any circumstances particular to their individual case. The amount of damages recoverable per person would be a matter for argument, but a figure of £750 was advanced in a letter of claim. Multiplied by the number of people whom Mr Lloyd claims to represent, this would produce an award of damages of the order of £3 billion.

6. Because Google is a Delaware corporation, the claimant needs the court’s permission to serve the claim form on Google outside the jurisdiction. The application for permission has been contested by Google on the grounds that the claim has no real prospect of success as: (1) damages cannot be awarded under the DPA 1998 for “loss of control” of data without proof that it caused financial damage or distress; and (2) the claim in any event is not suitable to proceed as a representative action. In the High Court Warby J decided both issues in Google’s favour and therefore refused permission to serve the proceedings on Google: see [2018] EWHC 2599 (QB); [2019] 1 WLR 1265. The Court of Appeal reversed that decision, for reasons given in a judgment of the Chancellor, Sir Geoffrey Vos, with which Davis LJ and Dame Victoria Sharp agreed: [2019] EWCA Civ 1599; [2020] QB 747.

7. On this further appeal, because of the potential ramifications of the issues raised, as well as hearing the claimant and Google, the court has received written and oral submissions from the Information Commissioner and written submissions from five further interested parties.

8. In this judgment I will first summarise the facts alleged and the relevant legal framework for data protection before considering the different methods currently available in English procedural law for claiming collective redress and, in particular, the representative procedure which the claimant is seeking to use. Whether that procedure is capable of being used in this case critically depends, as the claimant accepts, on whether compensation for the alleged breaches of data protection law would need to be individually assessed. I will then consider the claimant’s arguments that individual assessment is unnecessary. For the reasons given in detail below, those arguments cannot in my view withstand scrutiny. In order to recover compensation under the DPA 1998 for any given individual, it would be necessary to show both that Google made some unlawful use of personal data relating to that individual and that the individual suffered some damage as a result. The claimant’s attempt to recover compensation under the Act without proving either matter in any individual case is therefore doomed to fail.

B. FACTUAL BACKGROUND

9. The relevant events took place between 9 August 2011 and 15 February 2012 and involved the alleged use by Google of what has been called the “Safari workaround” to bypass privacy settings on Apple iPhones.

10. Safari is an internet browser developed by Apple and installed on its iPhones. At the relevant time, unlike most other internet browsers, all relevant versions of Safari were set by default to block third party cookies. A “cookie” is a small block of data that is placed on a device when the user visits a website. A “third party cookie” is a cookie placed on the device not by the website visited by the user but by a third party whose content is included on that website. Third party cookies are often used to gather information about internet use, and in particular web pages visited over time, to enable the delivery to the user of advertisements tailored to interests inferred from the user’s browsing history.

11. Google had a cookie known as the “DoubleClick Ad cookie” which could operate as a third party cookie. It would be placed on a device if the user visited a website that included DoubleClick Ad content. The DoubleClick Ad cookie enabled Google to identify visits by the device to any website displaying an advertisement from its vast advertising network and to collect considerable amounts of information. It could tell the date and time of any visit to a given website, how long the user spent there, which pages were visited for how long, and what advertisements were viewed for how long. In some cases, by means of the IP address of the browser, the user’s approximate geographical location could be identified.

12. Although the default settings for Safari blocked all third party cookies, a blanket application of these settings would have prevented the use of certain popular web functions; so Apple devised some exceptions to them. These exceptions were in place until March 2012, when the system was changed. But in the meantime the exceptions made it possible for Google to devise and implement the Safari workaround. Its effect was to place the DoubleClick Ad cookie on an Apple device, without the user’s knowledge or consent, immediately, whenever the user visited a website that contained DoubleClick Ad content.

13. It is alleged that, in this way, Google was able to collect or infer information relating not only to users’ internet surfing habits and location, but also about such diverse factors as their interests and pastimes, race or ethnicity, social class, political or religious beliefs or affiliations, health, sexual interests, age, gender and financial situation.

14. Further, it is said that Google aggregated browser generated information from users displaying similar patterns, creating groups with labels such as “football lovers”, or “current affairs enthusiasts”. Google’s DoubleClick service then offered these group labels to subscribing advertisers to choose from when selecting the type of people at whom they wanted to target their advertisements.
full_prompt: the system_instruction, user_request, and context_document above concatenated verbatim (omitted here as a duplicate of those fields).
prompt: the same fields assembled into the EVIDENCE / USER / Assistant template described for the previous row (context omitted here as a duplicate).
has_url_in_context: false | len_system: 21 | len_user: 6 | len_context: 1,603 | target: null | row_id: 754
Exclusively use the information found in the prompt to provide responses limited to 300 words. Use bullet formatting if listing more than 2 items. Use numerical formatting if the response includes instructions. If there is not enough information to answer some or all of a user's prompt, then state so but answer the parts that you can, if any. Cite examples supporting main statements in your response when available.
How does Executive Order (E.O.) 14110 address concerns that were previously outlined in the E.O. on _Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety_?
A number of concerns have been raised about law enforcement use of AI, including whether its use perpetuates biases; one criticism is that the data on which the software are trained contain bias, thus training bias into the AI systems. Another concern is whether reliance on AI technology may lead police to ignore contradictory evidence. Policymakers may consider increased oversight over police use of AI systems to help evaluate and alleviate some of the shortcomings. On October 30, 2023, President Biden issued Executive Order (E.O.) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This E.O. advances a government-wide approach to “governing the development and use of AI safely and responsibly” and directs efforts in AI policy areas involving safety and security, innovation and competition, worker support, equity and civil rights, individual protections, privacy protections, federal AI use, and international leadership. E.O. 14110 acknowledges the risk of AI exacerbating discrimination and directs federal law enforcement in various ways. (In doing so, it references accountability focused directives for federal law enforcement previously outlined in the May 25, 2022, E.O. 14074 on Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety.) Directives in E.O. 14110 include the following: • The Attorney General (AG) shall coordinate and support enforcement of federal laws addressing discrimination and violations of civil rights and civil liberties related to AI. The Department of Justice’s Civil Rights Division shall also coordinate with other federal civil rights offices to assess how their offices can prevent and address discrimination in automated systems—including algorithmic discrimination. • The AG, with the Homeland Security Secretary and Office of Science and Technology Policy Director, shall submit a report to the President on the use of AI in the criminal justice system, including how AI can enhance law enforcement efficiency and accuracy, consistent with privacy, civil rights, and civil liberties protections. The report should also recommend best practices for law enforcement, including guidance on AI use, to address concerns outlined in E.O. 14074 with respect to law enforcement use of “facial recognition technology, other technologies using biometric information, and predictive algorithms, as well as data storage and access regarding such technologies.” • The interagency working group established by E.O. 14074 shall share best practices for recruiting law enforcement professionals with AI expertise and training them on responsible AI use. The AG, along with the Homeland Security Secretary, may review these and recommend best practices for state, local, tribal, and territorial law enforcement. • The AG shall review the Justice Department’s capacity to “investigate law enforcement deprivation of rights under color of law resulting from the use of AI,” including through increasing or improving training for federal law enforcement officers and prosecutors. Congressional Research Service 3 Policymakers conducting oversight of executive branch activities to ensure that AI is used in a fair and equitable manner may examine not only these elements of E.O. 14110 that specifically relate to federal law enforcement but also other elements—such as the development of industry standards on AI—that may in turn affect law enforcement use of AI. 
They may also explore whether there should be specific standards for AI use in the criminal justice sector or AI-specific requirements for criminal justice entities receiving federal grants. Additionally, policymakers may continue to debate law enforcement use of specific AI technologies in its toolbox such as facial recognition technology.
System instruction: Exclusively use the information found in the prompt to provide responses limited to 300 words. Use bullet formatting if listing more than 2 items. Use numerical formatting is the response includes instructions. If there's is not enough information to answer some or all of an user's prompt then state so but answer the parts that you can, if any. Cite examples supporting main statements in your response when available. context: A number of concerns have been raised about law enforcement use of AI, including whether its use perpetuates biases; one criticism is that the data on which the software are trained contain bias, thus training bias into the AI systems. Another concern is whether reliance on AI technology may lead police to ignore contradictory evidence. Policymakers may consider increased oversight over police use of AI systems to help evaluate and alleviate some of the shortcomings. On October 30, 2023, President Biden issued Executive Order (E.O.) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This E.O. advances a government-wide approach to “governing the development and use of AI safely and responsibly” and directs efforts in AI policy areas involving safety and security, innovation and competition, worker support, equity and civil rights, individual protections, privacy protections, federal AI use, and international leadership. E.O. 14110 acknowledges the risk of AI exacerbating discrimination and directs federal law enforcement in various ways. (In doing so, it references accountability focused directives for federal law enforcement previously outlined in the May 25, 2022, E.O. 14074 on Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety.) Directives in E.O. 14110 include the following: • The Attorney General (AG) shall coordinate and support enforcement of federal laws addressing discrimination and violations of civil rights and civil liberties related to AI. The Department of Justice’s Civil Rights Division shall also coordinate with other federal civil rights offices to assess how their offices can prevent and address discrimination in automated systems—including algorithmic discrimination. • The AG, with the Homeland Security Secretary and Office of Science and Technology Policy Director, shall submit a report to the President on the use of AI in the criminal justice system, including how AI can enhance law enforcement efficiency and accuracy, consistent with privacy, civil rights, and civil liberties protections. The report should also recommend best practices for law enforcement, including guidance on AI use, to address concerns outlined in E.O. 14074 with respect to law enforcement use of “facial recognition technology, other technologies using biometric information, and predictive algorithms, as well as data storage and access regarding such technologies.” • The interagency working group established by E.O. 14074 shall share best practices for recruiting law enforcement professionals with AI expertise and training them on responsible AI use. The AG, along with the Homeland Security Secretary, may review these and recommend best practices for state, local, tribal, and territorial law enforcement. 
• The AG shall review the Justice Department’s capacity to “investigate law enforcement deprivation of rights under color of law resulting from the use of AI,” including through increasing or improving training for federal law enforcement officers and prosecutors. Policymakers conducting oversight of executive branch activities to ensure that AI is used in a fair and equitable manner may examine not only these elements of E.O. 14110 that specifically relate to federal law enforcement but also other elements—such as the development of industry standards on AI—that may in turn affect law enforcement use of AI. They may also explore whether there should be specific standards for AI use in the criminal justice sector or AI-specific requirements for criminal justice entities receiving federal grants. Additionally, policymakers may continue to debate law enforcement use of specific AI technologies in its toolbox such as facial recognition technology. question: How does Executive Order (E.O.) 14110 address concerns that were previously outlined in the E.O. on _Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety_?
Exclusively use the information found in the prompt to provide responses limited to 300 words. Use bullet formatting if listing more than 2 items. Use numerical formatting is the response includes instructions. If there's is not enough information to answer some or all of an user's prompt then state so but answer the parts that you can, if any. Cite examples supporting main statements in your response when available. EVIDENCE: A number of concerns have been raised about law enforcement use of AI, including whether its use perpetuates biases; one criticism is that the data on which the software are trained contain bias, thus training bias into the AI systems. Another concern is whether reliance on AI technology may lead police to ignore contradictory evidence. Policymakers may consider increased oversight over police use of AI systems to help evaluate and alleviate some of the shortcomings. On October 30, 2023, President Biden issued Executive Order (E.O.) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This E.O. advances a government-wide approach to “governing the development and use of AI safely and responsibly” and directs efforts in AI policy areas involving safety and security, innovation and competition, worker support, equity and civil rights, individual protections, privacy protections, federal AI use, and international leadership. E.O. 14110 acknowledges the risk of AI exacerbating discrimination and directs federal law enforcement in various ways. (In doing so, it references accountability focused directives for federal law enforcement previously outlined in the May 25, 2022, E.O. 14074 on Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety.) Directives in E.O. 14110 include the following: • The Attorney General (AG) shall coordinate and support enforcement of federal laws addressing discrimination and violations of civil rights and civil liberties related to AI. The Department of Justice’s Civil Rights Division shall also coordinate with other federal civil rights offices to assess how their offices can prevent and address discrimination in automated systems—including algorithmic discrimination. • The AG, with the Homeland Security Secretary and Office of Science and Technology Policy Director, shall submit a report to the President on the use of AI in the criminal justice system, including how AI can enhance law enforcement efficiency and accuracy, consistent with privacy, civil rights, and civil liberties protections. The report should also recommend best practices for law enforcement, including guidance on AI use, to address concerns outlined in E.O. 14074 with respect to law enforcement use of “facial recognition technology, other technologies using biometric information, and predictive algorithms, as well as data storage and access regarding such technologies.” • The interagency working group established by E.O. 14074 shall share best practices for recruiting law enforcement professionals with AI expertise and training them on responsible AI use. The AG, along with the Homeland Security Secretary, may review these and recommend best practices for state, local, tribal, and territorial law enforcement. • The AG shall review the Justice Department’s capacity to “investigate law enforcement deprivation of rights under color of law resulting from the use of AI,” including through increasing or improving training for federal law enforcement officers and prosecutors. 
Policymakers conducting oversight of executive branch activities to ensure that AI is used in a fair and equitable manner may examine not only these elements of E.O. 14110 that specifically relate to federal law enforcement but also other elements—such as the development of industry standards on AI—that may in turn affect law enforcement use of AI. They may also explore whether there should be specific standards for AI use in the criminal justice sector or AI-specific requirements for criminal justice entities receiving federal grants. Additionally, policymakers may continue to debate law enforcement use of specific AI technologies in its toolbox such as facial recognition technology. USER: How does Executive Order (E.O.) 14110 address concerns that were previously outlined in the E.O. on _Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety_? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 69 | len_user: 31 | len_context: 564 | target: null | row_id: 136
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
I'm middle-aged, never smoked, had my ears blown out in the war, get a case of the sads pretty regular, and eat mostly garbage. What are my risk factors for dementia? What does cognitive engagement have to do with it?
high blood pressure People who have consistent high blood pressure (hypertension) in mid-life (ages 45 to 65) are more likely to develop dementia compared to those with normal blood pressure. High blood pressure can increase the risk of developing dementia, particularly vascular dementia, because of its effect on the heart, the arteries, and blood circulation. Smoking The evidence is strong and consistent that smokers are at a higher risk of developing dementia vs. non-smokers or ex-smokers. It’s never too late to quit! Smokers who quit can reduce their risk of developing dementia. diabetes People with type 2 diabetes in mid-life (ages 45 to 65) are at an increased risk of developing dementia, particularly Alzheimer’s disease and vascular dementia. Obesity Obesity in mid-life (ages 45 to 65) increases the risk of developing dementia. Obesity also increases the risk of developing other risk factors such as type 2 diabetes. lack of physical activity Physical inactivity in later life (ages 65 and up) increases the risk of developing dementia. poor diet An unhealthy diet, high in saturated fat, sugar, and salt, can increase the risk of developing many illnesses, including dementia and cardiovascular disease. high alcohol consumption Drinking excessively (more than 12 drinks per week), can increase your risk of developing dementia low cognitive engagement Cognitive engagement is thought to support the development of a “cognitive reserve”. This is the idea that people who actively use their brains throughout their lives may be more protected against brain cell damage caused by dementia. depression People who experience depression in mid- or later life have a higher risk of developing dementia. However, the relationship between depression and dementia is still unclear. Many researchers believe that depression is a risk factor for dementia, whereas others believe it may be an early symptom of the disease, or both. traumatic brain injury People who experience severe or repeated head injuries are at increased risk of developing dementia. Brain injuries may trigger a process that might eventually lead to dementia. This particularly affects athletes in boxing, soccer, hockey, and football, which often have repeated head injuries. Falls are the leading cause of traumatic brain injury. Falling is especially dangerous for older adults. hearing loss Mild levels of hearing loss increase the risk of cognitive decline and dementia. Though it is still unclear how exactly it affects cognitive decline, hearing loss can lead to social isolation, loss of independence, and problems with everyday activities. social isolation Social isolation can increase the risk of hypertension, coronary heart disease, depression, and dementia. Staying socially active may reduce the risk of dementia. Social interaction may also help slow down the progression of the disease. air pollution The relationship between air pollution and dementia is still unclear. However, it’s estimated that those living close to busy roads have a higher risk of dementia because they may be exposed to higher levels of air pollution from vehicle emissions. It’s never too soon, or too late, to make changes that will maintain or improve your brain health. Learn more about managing some of these risk factors.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== I'm middle-aged, never smoked, had my ears blown out in the war, get a case of the sads pretty regular, and eat mostly garbage. What are my risk factors for dementia? What does cognitive engagement have to do with it? {passage 0} ========== high blood pressure People who have consistent high blood pressure (hypertension) in mid-life (ages 45 to 65) are more likely to develop dementia compared to those with normal blood pressure. High blood pressure can increase the risk of developing dementia, particularly vascular dementia, because of its effect on the heart, the arteries, and blood circulation. Smoking The evidence is strong and consistent that smokers are at a higher risk of developing dementia vs. non-smokers or ex-smokers. It’s never too late to quit! Smokers who quit can reduce their risk of developing dementia. diabetes People with type 2 diabetes in mid-life (ages 45 to 65) are at an increased risk of developing dementia, particularly Alzheimer’s disease and vascular dementia. Obesity Obesity in mid-life (ages 45 to 65) increases the risk of developing dementia. Obesity also increases the risk of developing other risk factors such as type 2 diabetes. lack of physical activity Physical inactivity in later life (ages 65 and up) increases the risk of developing dementia. poor diet An unhealthy diet, high in saturated fat, sugar, and salt, can increase the risk of developing many illnesses, including dementia and cardiovascular disease. high alcohol consumption Drinking excessively (more than 12 drinks per week), can increase your risk of developing dementia low cognitive engagement Cognitive engagement is thought to support the development of a “cognitive reserve”. This is the idea that people who actively use their brains throughout their lives may be more protected against brain cell damage caused by dementia. depression People who experience depression in mid- or later life have a higher risk of developing dementia. However, the relationship between depression and dementia is still unclear. Many researchers believe that depression is a risk factor for dementia, whereas others believe it may be an early symptom of the disease, or both. traumatic brain injury People who experience severe or repeated head injuries are at increased risk of developing dementia. Brain injuries may trigger a process that might eventually lead to dementia. This particularly affects athletes in boxing, soccer, hockey, and football, which often have repeated head injuries. Falls are the leading cause of traumatic brain injury. Falling is especially dangerous for older adults. hearing loss Mild levels of hearing loss increase the risk of cognitive decline and dementia. Though it is still unclear how exactly it affects cognitive decline, hearing loss can lead to social isolation, loss of independence, and problems with everyday activities. social isolation Social isolation can increase the risk of hypertension, coronary heart disease, depression, and dementia. Staying socially active may reduce the risk of dementia. Social interaction may also help slow down the progression of the disease. air pollution The relationship between air pollution and dementia is still unclear. However, it’s estimated that those living close to busy roads have a higher risk of dementia because they may be exposed to higher levels of air pollution from vehicle emissions. 
It’s never too soon, or too late, to make changes that will maintain or improve your brain health. Learn more about managing some of these risk factors. https://alzheimer.ca/en/about-dementia/how-can-i-reduce-risk-dementia/risk-factors-dementia?gad_source=1&gclid=CjwKCAjw3P-2BhAEEiwA3yPhwN2aQl6V8InKOUxaehsfGBSWmuIpGEoeJdWNsl5fH_T9LOUlOk7-gxoCHcYQAvD_BwE
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: high blood pressure People who have consistent high blood pressure (hypertension) in mid-life (ages 45 to 65) are more likely to develop dementia compared to those with normal blood pressure. High blood pressure can increase the risk of developing dementia, particularly vascular dementia, because of its effect on the heart, the arteries, and blood circulation. Smoking The evidence is strong and consistent that smokers are at a higher risk of developing dementia vs. non-smokers or ex-smokers. It’s never too late to quit! Smokers who quit can reduce their risk of developing dementia. diabetes People with type 2 diabetes in mid-life (ages 45 to 65) are at an increased risk of developing dementia, particularly Alzheimer’s disease and vascular dementia. Obesity Obesity in mid-life (ages 45 to 65) increases the risk of developing dementia. Obesity also increases the risk of developing other risk factors such as type 2 diabetes. lack of physical activity Physical inactivity in later life (ages 65 and up) increases the risk of developing dementia. poor diet An unhealthy diet, high in saturated fat, sugar, and salt, can increase the risk of developing many illnesses, including dementia and cardiovascular disease. high alcohol consumption Drinking excessively (more than 12 drinks per week), can increase your risk of developing dementia low cognitive engagement Cognitive engagement is thought to support the development of a “cognitive reserve”. This is the idea that people who actively use their brains throughout their lives may be more protected against brain cell damage caused by dementia. depression People who experience depression in mid- or later life have a higher risk of developing dementia. However, the relationship between depression and dementia is still unclear. Many researchers believe that depression is a risk factor for dementia, whereas others believe it may be an early symptom of the disease, or both. traumatic brain injury People who experience severe or repeated head injuries are at increased risk of developing dementia. Brain injuries may trigger a process that might eventually lead to dementia. This particularly affects athletes in boxing, soccer, hockey, and football, which often have repeated head injuries. Falls are the leading cause of traumatic brain injury. Falling is especially dangerous for older adults. hearing loss Mild levels of hearing loss increase the risk of cognitive decline and dementia. Though it is still unclear how exactly it affects cognitive decline, hearing loss can lead to social isolation, loss of independence, and problems with everyday activities. social isolation Social isolation can increase the risk of hypertension, coronary heart disease, depression, and dementia. Staying socially active may reduce the risk of dementia. Social interaction may also help slow down the progression of the disease. air pollution The relationship between air pollution and dementia is still unclear. However, it’s estimated that those living close to busy roads have a higher risk of dementia because they may be exposed to higher levels of air pollution from vehicle emissions. It’s never too soon, or too late, to make changes that will maintain or improve your brain health. Learn more about managing some of these risk factors. 
USER: I'm middle-aged, never smoked, had my ears blown out in the war, get a case of the sads pretty regular, and eat mostly garbage. What are my risk factors for dementia? What does cognitive engagement have to do with it? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 26 | len_user: 40 | len_context: 510 | target: null | row_id: 0
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
What does each option's Greek measure, and how is it used to determine options pricing? Also, even though it's not a Greek, include an explanation of implied volatility.
Delta Delta measures how much an option's price can be expected to move for every $1 change in the price of the underlying security or index. For example, a Delta of 0.40 means the option's price will theoretically move $0.40 for every $1 change in the price of the underlying stock or index. As you might guess, this means the higher the Delta, the bigger the price change. Traders often use Delta to predict whether a given option will expire ITM. So, a Delta of 0.40 is taken to mean that at that moment in time, the option has about a 40% chance of being ITM at expiration. This doesn't mean higher-Delta options are always profitable. After all, if you paid a large premium for an option that expires ITM, you might not make any money. You can also think of Delta as the number of shares of the underlying stock the option behaves like. So, a Delta of 0.40 suggests that given a $1 move in the underlying stock, the option will likely gain or lose about the same amount of money as 40 shares of the stock. Call options Call options have a positive Delta that can range from 0.00 to 1.00. At-the-money options usually have a Delta near 0.50. The Delta will increase (and approach 1.00) as the option gets deeper ITM. The Delta of ITM call options will get closer to 1.00 as expiration approaches. The Delta of out-of-the-money call options will get closer to 0.00 as expiration approaches. Put options Put options have a negative Delta that can range from 0.00 to –1.00. At-the-money options usually have a Delta near –0.50. The Delta will decrease (and approach –1.00) as the option gets deeper ITM. The Delta of ITM put options will get closer to –1.00 as expiration approaches. The Delta of out-of-the-money put options will get closer to 0.00 as expiration approaches. Gamma Where Delta is a snapshot in time, Gamma measures the rate of change in an option's Delta over time. If you remember high school physics class, you can think of Delta as speed and Gamma as acceleration. In practice, Gamma is the rate of change in an option's Delta per $1 change in the price of the underlying stock. In the example above, we imagined an option with a Delta of .40. If the underlying stock moves $1 and the option moves $.40 along with it, the option's Delta is no longer 0.40. Why? This $1 move would mean the call option is now even deeper ITM, and so its Delta should move even closer to 1.00. So, let's assume that as a result the Delta is now 0.55. The change in Delta from 0.40 to 0.55 is 0.15—this is the option's Gamma. Because Delta can't exceed 1.00, Gamma decreases as an option gets further ITM and Delta approaches 1.00. After all, there's less room for acceleration as you approach top speed. Theta Theta tells you how much the price of an option should decrease each day as the option nears expiration, if all other factors remain the same. This kind of price erosion over time is known as time decay. Time-value erosion is not linear, meaning the price erosion of at-the-money (ATM), just slightly out-of-the-money, and ITM options generally increases as expiration approaches, while that of far out-of-the-money (OOTM) options generally decreases as expiration approaches. Time-value erosion Source: Schwab Center for Financial Research Vega Vega measures the rate of change in an option's price per one-percentage-point change in the implied volatility of the underlying stock. (There's more on implied volatility below.) 
While Vega is not a real Greek letter, it is intended to tell you how much an option's price should move when the volatility of the underlying security or index increases or decreases. More about Vega: Volatility is one of the most important factors affecting the value of options. A drop in Vega will typically cause both calls and puts to lose value. An increase in Vega will typically cause both calls and puts to gain value. Neglecting Vega can cause you to potentially overpay when buying options. All other factors being equal, when determining strategy, consider buying options when Vega is below "normal" levels and selling options when Vega is above "normal" levels. One way to determine this is to compare the historical volatility to the implied volatility. Chart studies for both values are available on StreetSmart Edge®. Rho Rho measures the expected change in an option's price per one-percentage-point change in interest rates. It tells you how much the price of an option should rise or fall if the risk-free interest rate (U.S. Treasury-bills)* increases or decreases. More about Rho: As interest rates increase, the value of call options will generally increase. As interest rates increase, the value of put options will usually decrease. For these reasons, call options have positive Rho and put options have negative Rho. Consider a hypothetical stock that's trading exactly at its strike price. If the stock is trading at $25, the 25 calls and the 25 puts would both be exactly at the money. You might see the calls trading at, say, $0.60, while the puts could be trading at $0.50. When interest rates are low, the price difference between puts and calls will be relatively small. If interest rates increase, the gap will get wider—calls will become more expensive and puts will become less so. Rho is generally not a huge factor in the price of an option, but should be considered if prevailing interest rates are expected to change, such as just before a Federal Open Market Committee (FOMC) meeting. Long-Term Equity AnticiPation Securities® (LEAPS®) options are far more sensitive to changes in interest rates than are shorter-term options. Implied volatility: like a Greek Though not actually a Greek, implied volatility is closely related. Implied volatility is a forecast of how volatile an underlying stock is expected to be in the future—but it's strictly theoretical. While it's possible to forecast a stock's future moves by looking at its historical volatility, among other factors, the implied volatility reflected in the price of an option is an inference based on other factors, too, such as upcoming earnings reports, merger and acquisition rumors, pending product launches, etc. Key points to remember: Figuring out exactly how volatile a stock will be at any given time is difficult, but looking at implied volatility can give you a sense of what assumptions market makers are using to determine their quoted bid and ask prices. As such, implied volatility can be a helpful proxy in gauging the market. Higher-than-normal implied volatilities are usually more favorable for options sellers, while lower-than-normal implied volatilities are more favorable for option buyers, because volatility often reverts back to its mean over time. Implied volatility is often provided on options trading platforms because it is typically more useful for traders to know how volatile a market maker thinks a stock will be than to try to estimate it themselves. 
Implied volatility is usually not consistent for all options of a particular security or index and will generally be lowest for at-the-money and near-the-money options.
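To make the Delta, Gamma, Theta, Vega and Rho descriptions above concrete, here is a minimal sketch that computes these Greeks for a European call option. It assumes the standard Black-Scholes model with illustrative inputs; the article above does not prescribe any particular pricing formula, so the model choice, the function names and the example numbers are assumptions for illustration only.

```python
# Minimal illustrative sketch (not from the article): Black-Scholes Greeks
# for a European call. The model and the parameter values are assumptions.
from math import log, sqrt, exp, erf, pi

def norm_pdf(x):
    # Standard normal probability density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_greeks(S, K, T, r, sigma):
    """Delta, Gamma, Theta (per year), Vega and Rho (per 1-point change)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    delta = norm_cdf(d1)                               # 0.00 to 1.00 for calls
    gamma = norm_pdf(d1) / (S * sigma * sqrt(T))       # change in Delta per $1 move
    theta = (-S * norm_pdf(d1) * sigma / (2.0 * sqrt(T))
             - r * K * exp(-r * T) * norm_cdf(d2))     # time decay, per year
    vega = S * norm_pdf(d1) * sqrt(T) / 100.0          # per 1-percentage-point vol change
    rho = K * T * exp(-r * T) * norm_cdf(d2) / 100.0   # per 1-percentage-point rate change
    return delta, gamma, theta, vega, rho

# Hypothetical at-the-money example: stock and strike at $25, three months to
# expiration, a 2% risk-free rate, and 25% implied volatility.
print(call_greeks(S=25.0, K=25.0, T=0.25, r=0.02, sigma=0.25))
```

Run with these assumed inputs, the computed Delta comes out a little above 0.5, consistent with the article's statement that at-the-money call options usually have a Delta near 0.50.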
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. What does each option's Greek measure and how is it used to determine options pricing?Also, even though it's not a Greek, also include an explanation of implied volatility. Delta Delta measures how much an option's price can be expected to move for every $1 change in the price of the underlying security or index. For example, a Delta of 0.40 means the option's price will theoretically move $0.40 for every $1 change in the price of the underlying stock or index. As you might guess, this means the higher the Delta, the bigger the price change. Traders often use Delta to predict whether a given option will expire ITM. So, a Delta of 0.40 is taken to mean that at that moment in time, the option has about a 40% chance of being ITM at expiration. This doesn't mean higher-Delta options are always profitable. After all, if you paid a large premium for an option that expires ITM, you might not make any money. You can also think of Delta as the number of shares of the underlying stock the option behaves like. So, a Delta of 0.40 suggests that given a $1 move in the underlying stock, the option will likely gain or lose about the same amount of money as 40 shares of the stock. Call options Call options have a positive Delta that can range from 0.00 to 1.00. At-the-money options usually have a Delta near 0.50. The Delta will increase (and approach 1.00) as the option gets deeper ITM. The Delta of ITM call options will get closer to 1.00 as expiration approaches. The Delta of out-of-the-money call options will get closer to 0.00 as expiration approaches. Put options Put options have a negative Delta that can range from 0.00 to –1.00. At-the-money options usually have a Delta near –0.50. The Delta will decrease (and approach –1.00) as the option gets deeper ITM. The Delta of ITM put options will get closer to –1.00 as expiration approaches. The Delta of out-of-the-money put options will get closer to 0.00 as expiration approaches. Gamma Where Delta is a snapshot in time, Gamma measures the rate of change in an option's Delta over time. If you remember high school physics class, you can think of Delta as speed and Gamma as acceleration. In practice, Gamma is the rate of change in an option's Delta per $1 change in the price of the underlying stock. In the example above, we imagined an option with a Delta of .40. If the underlying stock moves $1 and the option moves $.40 along with it, the option's Delta is no longer 0.40. Why? This $1 move would mean the call option is now even deeper ITM, and so its Delta should move even closer to 1.00. So, let's assume that as a result the Delta is now 0.55. The change in Delta from 0.40 to 0.55 is 0.15—this is the option's Gamma. Because Delta can't exceed 1.00, Gamma decreases as an option gets further ITM and Delta approaches 1.00. After all, there's less room for acceleration as you approach top speed. Theta Theta tells you how much the price of an option should decrease each day as the option nears expiration, if all other factors remain the same. This kind of price erosion over time is known as time decay. Time-value erosion is not linear, meaning the price erosion of at-the-money (ATM), just slightly out-of-the-money, and ITM options generally increases as expiration approaches, while that of far out-of-the-money (OOTM) options generally decreases as expiration approaches. 
Time-value erosion Source: Schwab Center for Financial Research Vega Vega measures the rate of change in an option's price per one-percentage-point change in the implied volatility of the underlying stock. (There's more on implied volatility below.) While Vega is not a real Greek letter, it is intended to tell you how much an option's price should move when the volatility of the underlying security or index increases or decreases. More about Vega: Volatility is one of the most important factors affecting the value of options. A drop in Vega will typically cause both calls and puts to lose value. An increase in Vega will typically cause both calls and puts to gain value. Neglecting Vega can cause you to potentially overpay when buying options. All other factors being equal, when determining strategy, consider buying options when Vega is below "normal" levels and selling options when Vega is above "normal" levels. One way to determine this is to compare the historical volatility to the implied volatility. Chart studies for both values are available on StreetSmart Edge®. Rho Rho measures the expected change in an option's price per one-percentage-point change in interest rates. It tells you how much the price of an option should rise or fall if the risk-free interest rate (U.S. Treasury-bills)* increases or decreases. More about Rho: As interest rates increase, the value of call options will generally increase. As interest rates increase, the value of put options will usually decrease. For these reasons, call options have positive Rho and put options have negative Rho. Consider a hypothetical stock that's trading exactly at its strike price. If the stock is trading at $25, the 25 calls and the 25 puts would both be exactly at the money. You might see the calls trading at, say, $0.60, while the puts could be trading at $0.50. When interest rates are low, the price difference between puts and calls will be relatively small. If interest rates increase, the gap will get wider—calls will become more expensive and puts will become less so. Rho is generally not a huge factor in the price of an option, but should be considered if prevailing interest rates are expected to change, such as just before a Federal Open Market Committee (FOMC) meeting. Long-Term Equity AnticiPation Securities® (LEAPS®) options are far more sensitive to changes in interest rates than are shorter-term options. Implied volatility: like a Greek Though not actually a Greek, implied volatility is closely related. Implied volatility is a forecast of how volatile an underlying stock is expected to be in the future—but it's strictly theoretical. While it's possible to forecast a stock's future moves by looking at its historical volatility, among other factors, the implied volatility reflected in the price of an option is an inference based on other factors, too, such as upcoming earnings reports, merger and acquisition rumors, pending product launches, etc. Key points to remember: Figuring out exactly how volatile a stock will be at any given time is difficult, but looking at implied volatility can give you a sense of what assumptions market makers are using to determine their quoted bid and ask prices. As such, implied volatility can be a helpful proxy in gauging the market. Higher-than-normal implied volatilities are usually more favorable for options sellers, while lower-than-normal implied volatilities are more favorable for option buyers, because volatility often reverts back to its mean over time. 
Implied volatility is often provided on options trading platforms because it is typically more useful for traders to know how volatile a market maker thinks a stock will be than to try to estimate it themselves. Implied volatility is usually not consistent for all options of a particular security or index and will generally be lowest for at-the-money and near-the-money options. https://www.schwab.com/learn/story/get-to-know-option-greeks
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: Delta Delta measures how much an option's price can be expected to move for every $1 change in the price of the underlying security or index. For example, a Delta of 0.40 means the option's price will theoretically move $0.40 for every $1 change in the price of the underlying stock or index. As you might guess, this means the higher the Delta, the bigger the price change. Traders often use Delta to predict whether a given option will expire ITM. So, a Delta of 0.40 is taken to mean that at that moment in time, the option has about a 40% chance of being ITM at expiration. This doesn't mean higher-Delta options are always profitable. After all, if you paid a large premium for an option that expires ITM, you might not make any money. You can also think of Delta as the number of shares of the underlying stock the option behaves like. So, a Delta of 0.40 suggests that given a $1 move in the underlying stock, the option will likely gain or lose about the same amount of money as 40 shares of the stock. Call options Call options have a positive Delta that can range from 0.00 to 1.00. At-the-money options usually have a Delta near 0.50. The Delta will increase (and approach 1.00) as the option gets deeper ITM. The Delta of ITM call options will get closer to 1.00 as expiration approaches. The Delta of out-of-the-money call options will get closer to 0.00 as expiration approaches. Put options Put options have a negative Delta that can range from 0.00 to –1.00. At-the-money options usually have a Delta near –0.50. The Delta will decrease (and approach –1.00) as the option gets deeper ITM. The Delta of ITM put options will get closer to –1.00 as expiration approaches. The Delta of out-of-the-money put options will get closer to 0.00 as expiration approaches. Gamma Where Delta is a snapshot in time, Gamma measures the rate of change in an option's Delta over time. If you remember high school physics class, you can think of Delta as speed and Gamma as acceleration. In practice, Gamma is the rate of change in an option's Delta per $1 change in the price of the underlying stock. In the example above, we imagined an option with a Delta of .40. If the underlying stock moves $1 and the option moves $.40 along with it, the option's Delta is no longer 0.40. Why? This $1 move would mean the call option is now even deeper ITM, and so its Delta should move even closer to 1.00. So, let's assume that as a result the Delta is now 0.55. The change in Delta from 0.40 to 0.55 is 0.15—this is the option's Gamma. Because Delta can't exceed 1.00, Gamma decreases as an option gets further ITM and Delta approaches 1.00. After all, there's less room for acceleration as you approach top speed. Theta Theta tells you how much the price of an option should decrease each day as the option nears expiration, if all other factors remain the same. This kind of price erosion over time is known as time decay. Time-value erosion is not linear, meaning the price erosion of at-the-money (ATM), just slightly out-of-the-money, and ITM options generally increases as expiration approaches, while that of far out-of-the-money (OOTM) options generally decreases as expiration approaches. 
Time-value erosion Source: Schwab Center for Financial Research Vega Vega measures the rate of change in an option's price per one-percentage-point change in the implied volatility of the underlying stock. (There's more on implied volatility below.) While Vega is not a real Greek letter, it is intended to tell you how much an option's price should move when the volatility of the underlying security or index increases or decreases. More about Vega: Volatility is one of the most important factors affecting the value of options. A drop in Vega will typically cause both calls and puts to lose value. An increase in Vega will typically cause both calls and puts to gain value. Neglecting Vega can cause you to potentially overpay when buying options. All other factors being equal, when determining strategy, consider buying options when Vega is below "normal" levels and selling options when Vega is above "normal" levels. One way to determine this is to compare the historical volatility to the implied volatility. Chart studies for both values are available on StreetSmart Edge®. Rho Rho measures the expected change in an option's price per one-percentage-point change in interest rates. It tells you how much the price of an option should rise or fall if the risk-free interest rate (U.S. Treasury-bills)* increases or decreases. More about Rho: As interest rates increase, the value of call options will generally increase. As interest rates increase, the value of put options will usually decrease. For these reasons, call options have positive Rho and put options have negative Rho. Consider a hypothetical stock that's trading exactly at its strike price. If the stock is trading at $25, the 25 calls and the 25 puts would both be exactly at the money. You might see the calls trading at, say, $0.60, while the puts could be trading at $0.50. When interest rates are low, the price difference between puts and calls will be relatively small. If interest rates increase, the gap will get wider—calls will become more expensive and puts will become less so. Rho is generally not a huge factor in the price of an option, but should be considered if prevailing interest rates are expected to change, such as just before a Federal Open Market Committee (FOMC) meeting. Long-Term Equity AnticiPation Securities® (LEAPS®) options are far more sensitive to changes in interest rates than are shorter-term options. Implied volatility: like a Greek Though not actually a Greek, implied volatility is closely related. Implied volatility is a forecast of how volatile an underlying stock is expected to be in the future—but it's strictly theoretical. While it's possible to forecast a stock's future moves by looking at its historical volatility, among other factors, the implied volatility reflected in the price of an option is an inference based on other factors, too, such as upcoming earnings reports, merger and acquisition rumors, pending product launches, etc. Key points to remember: Figuring out exactly how volatile a stock will be at any given time is difficult, but looking at implied volatility can give you a sense of what assumptions market makers are using to determine their quoted bid and ask prices. As such, implied volatility can be a helpful proxy in gauging the market. Higher-than-normal implied volatilities are usually more favorable for options sellers, while lower-than-normal implied volatilities are more favorable for option buyers, because volatility often reverts back to its mean over time. 
Implied volatility is often provided on options trading platforms because it is typically more useful for traders to know how volatile a market maker thinks a stock will be than to try to estimate it themselves. Implied volatility is usually not consistent for all options of a particular security or index and will generally be lowest for at-the-money and near-the-money options. USER: What does each option's Greek measure and how is it used to determine options pricing?Also, even though it's not a Greek, also include an explanation of implied volatility. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 24 | len_user: 28 | len_context: 1,192 | target: null | row_id: 588
You will answer all questions using only information from the resource provided in the prompt.
Use the text provided to explain the difference between Adam Smith's economic philosophy and that of Friedrich List.
Class 1: The Purpose of the Corporation (Dodge v. Ford Motor Company) Dodge v. Ford Motor Company is a great case. It is important because its ruling touches on a question at the very core of corporate law: what is the purpose of the corporation? Is it exclusively to make the most money for shareholders? (And if so – making the most money long-term or short- term?) Or perhaps it is also permissible – or even required – that the corporation would act in the interests of other stakeholders – employees, creditors, customers, the local community, or the nation in which it is incorporated? But there is another reason why Dodge v. Ford Motor Company is a great case: the parties are pretending to act for reasons different than those that really motivate them. As we will see in class, the plaintiff and defendant present their interests in ways that don’t make sense once you think things through. And read narrowly, the court’s decision seems almost arbitrary and in contrast to established law. But once you understand the entire context, the court ruling can be seen as a clever way to maintain both the letter and the spirit of established law. But no case is perfect. The main weakness of Dodge is that it is not well-written; indeed, it is quite boring to read. Another weakness is that the actual legal question it discusses is a narrow one that requires knowing some corporate law to understand. Therefore, though I am including the text of the case for you to read ahead of class, it is not the main assignment and you should not feel frustrated if it’s not clear to you. I will explain the case in class. Rather, the main reading assignment ahead of class is an excerpt from an old magazine article, about an economist you may never have heard about – Friedrich List. I think this is a more enjoyable reading, and it will give you background for a discussion on the big policy question Dodge tackles: whose interests should the corporation serve? No doubt you have heard of Adam Smith and later classical economists who espoused free-market economics, based on the idea that self-interested behavior by market participants enriches society as a whole. The line of corporate law doctrine that fits with this worldview is the norm that a corporation should operate solely for the purpose of its shareholders, and that this would ultimately benefit all other stakeholders (employees, customers, society as a whole, etc.). Friedrich List is a leading intellectual force behind an opposing view, which is why I ask that you read the article to understand the main differences between his world view and that of his free- market opponents (which he called the “cosmopolitans”). While List is not widely known today, his work is credited with influencing the thinking of several policy makers and leaders, including China’s Deng Xiaoping. In some ways, List appears more relevant to political debate today – with the rise of populist politicians in several countries including the U.S. – than it was when the article was written. But in other ways, this article is very much a product of its time. To a contemporary reader it may appear odd how much Japan and Germany are mentioned in the article compared to other countries (for example, China). But this was very typical of American policy analysis (and popular culture) in the 1980s. At that time, the American economy was relatively stagnant, while the economies of Japan and Germany were booming. 
The US had a large trade deficit with these countries, with cheaper German and Japanese imports crowding out a shrinking American industry, and German and Japanese firms used the dollars they acquired from the deficit to acquire iconic American assets. The result was fear of those two countries on one hand, and a desire to mimic them on the other hand. The article is in the tail end of that trend; by the 1990s Japan entered a prolonged recession, the German economy slowed under the costs of the reunification of West and East Germany, and the American economy prospered again. You may be more familiar with a reincarnation of this trend, in the 2000s and early 2010s, this time focused on China.
System Instruction: You will answer all questions using only information from the resource provided in the prompt. Question: Use the text provided to explain the difference between Adam Smith's economic philosophy and that of Friedrich List. Context Block: Class 1: The Purpose of the Corporation (Dodge v. Ford Motor Company) Dodge v. Ford Motor Company is a great case. It is important because its ruling touches on a question at the very core of corporate law: what is the purpose of the corporation? Is it exclusively to make the most money for shareholders? (And if so – making the most money long-term or short- term?) Or perhaps it is also permissible – or even required – that the corporation would act in the interests of other stakeholders – employees, creditors, customers, the local community, or the nation in which it is incorporated? But there is another reason why Dodge v. Ford Motor Company is a great case: the parties are pretending to act for reasons different than those that really motivate them. As we will see in class, the plaintiff and defendant present their interests in ways that don’t make sense once you think things through. And read narrowly, the court’s decision seems almost arbitrary and in contrast to established law. But once you understand the entire context, the court ruling can be seen as a clever way to maintain both the letter and the spirit of established law. But no case is perfect. The main weakness of Dodge is that it is not well-written; indeed, it is quite boring to read. Another weakness is that the actual legal question it discusses is a narrow one that requires knowing some corporate law to understand. Therefore, though I am including the text of the case for you to read ahead of class, it is not the main assignment and you should not feel frustrated if it’s not clear to you. I will explain the case in class. Rather, the main reading assignment ahead of class is an excerpt from an old magazine article, about an economist you may never have heard about – Friedrich List. I think this is a more enjoyable reading, and it will give you background for a discussion on the big policy question Dodge tackles: whose interests should the corporation serve? No doubt you have heard of Adam Smith and later classical economists who espoused free-market economics, based on the idea that self-interested behavior by market participants enriches society as a whole. The line of corporate law doctrine that fits with this worldview is the norm that a corporation should operate solely for the purpose of its shareholders, and that this would ultimately benefit all other stakeholders (employees, customers, society as a whole, etc.). Friedrich List is a leading intellectual force behind an opposing view, which is why I ask that you read the article to understand the main differences between his world view and that of his free- market opponents (which he called the “cosmopolitans”). While List is not widely known today, his work is credited with influencing the thinking of several policy makers and leaders, including China’s Deng Xiaoping. In some ways, List appears more relevant to political debate today – with the rise of populist politicians in several countries including the U.S. – than it was when the article was written. But in other ways, this article is very much a product of its time. To a contemporary reader it may appear odd how much Japan and Germany are mentioned in the article compared to other countries (for example, China). 
But this was very typical of American policy analysis (and popular culture) in the 1980s. At that time, the American economy was relatively stagnant, while the economies of Japan and Germany were booming. The US had a large trade deficit with these countries, with cheaper German and Japanese imports crowding out a shrinking American industry, and German and Japanese firms used the dollars they acquired from the deficit to acquire iconic American assets. The result was fear of those two countries on one hand, and a desire to mimic them on the other hand. The article is in the tail end of that trend; by the 1990s Japan entered a prolonged recession, the German economy slowed under the costs of the reunification of West and East Germany, and the American economy prospered again. You may be more familiar with a reincarnation of this trend, in the 2000s and early 2010s, this time focused on China.
false | 15 | 18 | 713 | null | 92
Use only the information contained in the prompt to answer any questions the user may ask. Do not use any other sources or any information from your stored data from before this conversation. If you cannot answer the user's question using only the provided context, say "I can't determine the answer as the information you are seeking is not provided in the reference document." Format your answer in a bullet point list.
What mechanisms have been proposed for post-Covid neurological complications?
Introduction

The predominant acute presentations of COVID-19 are respiratory, but neurological manifestations have been recognized as an important component of the disease, even in cases without respiratory symptoms (2-5). The neurological manifestations associated with COVID-19 range from mild to critical, affect adults and children and can present both during and after acute COVID-19 infection. Reported neurological signs, symptoms or syndromes in the acute phase include headache, dizziness, impaired taste or smell, delirium, agitation, stroke, seizures, coma, meningoencephalitis and Guillain-Barré syndrome (6, 7). Consequences in the post-acute phase are also emerging, as either persisting or newly developing signs and symptoms (post-COVID-19 condition); these include headache, problems with smell or taste, cognitive impairment, confusion, fatigue, difficulty concentrating, sleep disturbances and neuropsychiatric symptoms (8, 9).

COVID-19 disproportionately affects people with pre-existing neurological disorders. Chronic neurological disorders were found to be independently associated with increased mortality in hospitalized COVID-19 patients (hazard ratio [HR]: 2.13; 95% confidence interval [CI]: 1.38–3.28) (10). Individuals with pre-existing neurological conditions have been affected by disruptions to routine care, delayed care because of concerns about infectious risks and disruptions to supply chains for medicines and resultant stock-outs (11).

This scientific brief provides a comprehensive overview of the relationship between neurology and COVID-19 and covers what is currently known about:
• the acute neurological manifestations of COVID-19
• the neurological sequelae associated with post-COVID-19 condition
• the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions
• the extent of disruptions to neurological services caused by the pandemic and mitigation strategies to address these disruptions
• emerging evidence for neurological complications following COVID-19 vaccination.

The target audience for this document includes health care providers, researchers, policy-makers and other stakeholders interested in the evidence relating to neurology and COVID-19. The aim is to increase awareness and recognition of the associated neurological aspects of COVID-19 to improve care and mitigation responses, particularly in low-resource settings.

Methods

This scientific brief is based on the evidence that emerged from systematic or rapid reviews and meta-analyses commissioned by WHO (14);1 WHO pulse surveys (15); WHO’s rapid assessment on services for mental, neurological and substance use (MNS) disorders (16) and other relevant publications.

1 A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265); and Misra S, Kolappa K, Prasad M, Radhakrishnan D, Thakur KT, Solomon T. et al. Frequency of neurological manifestations in COVID-19: a systematic review and meta-analysis of 350 studies (https://www.medrxiv.org/content/10.1101/2021.04.20.21255780v1)

Review of the evidence

Acute neurological manifestations of COVID-19

To assess the types and frequencies of reported neurological manifestations associated with COVID-19, WHO assisted with a systematic review and meta-analysis involving data from 145 721 patients with acute COVID-19 infections derived from 350 case series (17).
COVID-19 infection was confirmed by real-time reverse-transcription polymerase chain reaction (RT-PCR) detection, high-throughput sequencing, SARS-CoV-2 antibody detection in blood samples or SARS-CoV-2 viral culture in throat swab specimens. Most patients (n=129 786, 89%) included in the review were hospitalized. A total of 23 acute neurological symptoms (Table 1) and 14 neurological diagnoses (Table 2) were reported in the literature. Up to one third (n=48 059) of COVID-19 patients experienced some type of neurological manifestation, and 1 in 50 developed a stroke. In COVID-19 patients aged over 60 years, the most frequent neurological manifestation was acute confusion/delirium (pooled prevalence: 34%; 95% CI: 23–46%). For all ages, the likelihood of experiencing acute confusion/delirium, stroke, seizure and movement disorders increased with increasing severity of COVID-19, but these associations were not statistically significant. Smell and taste impairments were significantly associated with non-severe COVID-19 (odds ratio [OR]: 0.44; 95% CI: 0.28–0.68 and OR: 0.62; 95% CI: 0.42–0.91, respectively). In COVID-19 patients aged over 60 years, the presence of any neurological manifestations was associated with significantly increased mortality (OR: 1.80; 95% CI: 1.11–2.91).

Limitations

The overall risk of bias was assessed as being low for most studies (n=296, 85%), but studies with higher risk of bias yielded higher prevalence estimates. Also, for most outcomes the meta-analyses yielded a high degree of heterogeneity, indicating substantial clinical or methodological diversity, which in some instances rendered the pooling of data inappropriate. There are gaps in the evidence for non-hospitalized patient cohorts because their data are rarely reported in the literature. The evidence gaps have implications for incidence, prevalence, duration and severity. Similarly, the timing of the onset of signs or symptoms is often not reported. Limitations in study design of included case series precluded the comparison between acute neurological manifestations caused by COVID-19 and the incidence of such manifestations in the general population. Finally, in the absence of well-designed cohort studies, there are insufficient data to definitively assert causality between these symptoms and COVID-19.

Neurological sequelae associated with post-COVID-19 condition

Complications following acute viral illnesses are well described (18, 19). Soon after the advent of the COVID-19 pandemic, longitudinal cohort studies started to assess long-term sequelae of COVID-19, including neurological manifestations. At the same time, patients began to connect with each other and report on prolonged symptoms of COVID-19. In response, WHO commissioned a rapid review of 28 published population-based, cohort or case-control studies.2 The review established specific new-onset neurological symptoms, signs or diagnoses occurring after the acute phase of COVID-19 that can be interpreted as complications of COVID-19; assessed specific neurological symptoms, signs or diagnoses that persist after the acute phase of COVID-19; and determined factors associated with these post-acute neurological manifestations. Of the 28 studies, only two followed patients for up to 6 months. Pooling of information was not possible for methodological reasons.
In a retrospective cohort of 1733 COVID-19 patients discharged from hospital, 19.6% (n=340) reported neurological manifestations after a median follow-up of 186 days (9). The complaints most commonly reported were fatigue or muscle weakness (63%; 1038/1655) and sleep difficulties (26%; 437/1655). Anxiety and depression were reported by 23% (367/1617) of patients and difficulty walking by 24% (103/423). The second prospective study followed 61 hospitalized COVID-19 patients with and without history of admission to an intensive care unit (ICU) (20). Common complaints at discharge included amnestic dysfunction (30%; 18/61), dysexecutive syndrome (33%; 20/61), ataxia (11%; 7/61), and tetraparesis (18%; 11/61) (20).

2 Beghi E, Giussani G, Westenberg E, Allegri R, Garcia-Azorin D, Guekht A, Acute and Post-Acute Neurological Manifestations of COVID-19: Present findings, critical appraisal, and future directions. Manuscript in preparation, 2021.

Limitations

The evidence for long-term or newly emerging neurological complications after COVID-19 is limited, particularly in asymptomatic or non-hospitalized patients. Similarly, little is known about neurological sequelae in paediatric patients with conditions related to COVID-19, including multisystem inflammatory syndrome (MIS-C). Data from low- and middle-income countries are scarce, particularly in the post-acute phase. This has led to underreporting of neurological findings in the context of COVID-19 with reference to geography, ethnicity and sociocultural environment. Methodological issues and study design flaws further reduce the strength of the current evidence because some studies have included in the control group asymptomatic patients who were not screened with molecular or serological tests to confirm or exclude SARS-CoV-2 infection. Screening methods and diagnostic protocols vary across studies, depending on the background of the local investigators, the diagnostic approach, the number and type of contacts during follow-up and, not least, attrition and patient compliance. In addition, studies were done under surge conditions, which led to incomplete diagnostic assessment. The current understanding of neurological sequelae associated with post-COVID-19 condition is based mainly on patient reports; clinically relevant manifestations; and greater attention towards symptoms, signs and diseases that have been illustrated in previous reports. By contrast, information is limited on signs that can be documented only through testing, imaging or biochemical or pathological investigations.

Pre-existing neurological conditions and COVID-19

A range of pre-existing noncommunicable diseases (NCDs) are associated with an increased risk of severe outcomes in COVID-19 (21). These include several neurological conditions such as stroke and dementia. People with certain pre-existing neurological conditions are more vulnerable to SARS-CoV-2 infection, experience exacerbations of their pre-existing disease (22) and have higher risks of severe outcomes and death (10, 23).
To synthesize the growing evidence on this topic, WHO commissioned a rapid review of 26 articles from 12 countries across three continents, with a total of 379 947 COVID-19 patients, to establish the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions.3 The rapid review found that certain pre-existing neurological diseases are associated with severity of COVID-19.4 The most prevalent were cerebrovascular disease and dementia/neurodegenerative diseases (pooled OR: 1.99; 95% CI: 1.81–2.18). Mortality was high among people with pre-existing neurological conditions (pooled OR: 1.74; 95% CI: 1.56–1.94).

3 Chomba M, Schiess N, Seeher K, Akpalu A, Baila J, Boruah AP et al. Pre-existing neurological conditions and COVID-19 risk. A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265)
4 Ibid.

Limitations

Risk of bias was deemed high for most articles, and the overall quality of studies using GRADE (Grading of Recommendations Assessment, Development and Evaluations) methodology was low; hence, the value of the current evidence is limited. Most studies on the relationship between SARS-CoV-2 and pre-existing neurological conditions are based on retrospective cohorts or case series, with few data from prospective studies. Future research will benefit greatly from the use of standardized definitions and reporting for comorbidities, neurological symptoms or diagnoses. Use of standardized case report forms – such as those published by WHO (25, 26) – can also contribute to the accuracy and reliability of data.

Disruptions to essential neurological services caused by the COVID-19 pandemic and mitigation strategies

Interruption of routine treatment and care, as well as supply chains for medications during the COVID-19 pandemic, present significant challenges for people with neurological conditions (11). According to the latest WHO Pulse survey on continuity of essential health services during the COVID-19 pandemic (27), 45% of 121 countries for which information was available still reported disruptions to services for MNS disorders in the first quarter of 2021. Likewise, disruptions to rehabilitation services, a crucial aspect of neurological care, continue to be reported by 53% (of 89 countries). With respect to neurology-specific services, WHO’s rapid assessment of services for MNS disorders during the COVID-19 pandemic in mid-2020 (16) revealed that one in three of 98 countries closed down neurology inpatient units at least partly during the pandemic. Regarding service provision, surgeries for neurological disorders were disrupted in two-thirds of 130 countries for which information was available, and the management of emergency conditions such as status epilepticus was at least partially disrupted in 35% of the same 130 countries. To better understand the extent of service disruption, its causes and mitigation strategies for neurological disorders in the context of COVID-19, WHO commissioned a rapid review of 369 articles, providing data on 210 419 patients from 105 countries (14). Studies that investigated the extent of service disruption (n=188) described it as mild (n=40, 21%), moderate (n=131, 70%) or severe (n=10, 5%).
The most frequently described reasons for service disruption across 240 studies were travel restrictions related to lockdown (n=196, 82%), closure of services or consultations as per health authority directive (n=157, 65%) and reduced outpatient volume due to patients not presenting (n=135, 56%). A total of 224 studies reported on mitigation strategies, with the most frequently reported strategies being telemedicine and other teleconsultation formats (n=184, 82%), novel dispensing approaches for medicines (n=116, 52%) and redirection of patients (n=95, 42%).

Limitations

To date, most of the data on service disruption have been derived from high- and middle-income countries, with information from low-income countries lacking. Similarly, evidence of the effectiveness and acceptability of mitigation strategies to patients remains limited. In addition, the current published literature seems biased towards certain settings or types of services (e.g. outpatient, emergency or inpatient care). There are few reports on other areas that are crucial for treating people with chronic neurological conditions (e.g. neurorehabilitation). Going forward, more systematic evaluations and reporting of disruption of the whole spectrum of neurological services can provide a more comprehensive picture.

Neurological complications following COVID-19 vaccination

There is a low risk of neurological complications following COVID-19 vaccination, including Bell’s palsy (28), cerebral venous sinus thrombosis (CVST) and possibly Guillain-Barré syndrome (29). However, the risk of such complications is substantially lower than the risks associated with infection with SARS-CoV-2 (30, 31). Since March 2021, cases of thromboses associated with thrombocytopenia have been reported in patients vaccinated with the Oxford-AstraZeneca ChAdOx1-S and Johnson & Johnson (J&J) Janssen Ad26.COV2.S COVID-19 vaccines. Evaluation of the cases by national and international bodies concluded that there was a plausible causal link between these two adenovirus-vectored vaccines and CVST (32-34), based on the temporal association with vaccination and an increased incidence when compared with expected baseline rates of CVST (35-42). WHO has provided guidance for clinical case management of thrombosis with thrombocytopenia syndrome (TTS) following vaccination against COVID-19 (43).

Overall knowledge gaps

Current evidence suggests that SARS-CoV-2 can affect the nervous system. Multiple and probably overlapping mechanisms have been proposed for the neurological manifestations; they include hypoxia, cytokine storm, post-infectious autoimmune responses, hypercoagulability, neurologic complications of severe systemic illness and potential direct neurotropism. Questions remain regarding the characteristics, timing and severity of neurological manifestations of COVID-19, including the pathophysiological mechanisms through which SARS-CoV-2 affects the nervous system. As more data emerge, associations of specific neurological disorders with COVID-19 will be further clarified – as has been seen, for example, with Guillain-Barré syndrome (29). Prospective data, as well as biomarker and neuropathological studies, are needed on the short- and long-term neurological sequelae. Existing reports on the association between COVID-19 and most neurological manifestations are flawed by selection and information bias, and available data reflect the spectrum of neurological manifestations in patients with the more severe COVID-19 cases.
Neurological signs or symptoms occurring during the acute phase of COVID-19 infection cannot easily be disentangled from those with onset in the post-acute phase, and follow-up data are scarce, particularly for subclinical findings such as cognitive impairment. Other gaps in the literature include a lack of clarity on the interplay between pre-existing neurological disease and other underlying comorbidities such as hypertension and diabetes. Studies in this area were hospital-based and biased to people with more severe symptoms, making the findings difficult to generalize to people based in the community or having only mild symptoms. Understanding the impact of neurological conditions requires the inclusion of diverse populations from a variety of social backgrounds. Guidance is also needed for studies evaluating the disruption or the efficacy of mitigation strategies for care. Efforts should be made to harmonize the methods in this area of research and to enhance the comparability between studies and over time. In addition, funding for and progress in neurological research and training have been affected by the pandemic, owing to the temporary suspension of research projects or postponement or cancellation of fellowships, which need to be re-established as soon as possible (44).

Implications for further research

Well-designed case–control and cohort studies are needed to understand which patients are most vulnerable to neurological manifestations in the acute and post-COVID-19 condition and to understand causality related to COVID-19. Series of patients with neurological conditions need to be compared to patients without neurological conditions. Use of case report forms (CRFs) such as WHO’s post-COVID-19 condition CRF (45) is encouraged to standardize data collection. Future research directions should include more “bottom-up” evidence-gathering efforts; for example, international surveys of neurological associations such as one recently undertaken by the European Federation of Neurological Associations (EFNA) with support from members of the WHO Neurology and COVID-19 Global Forum (46).

Conclusion

A wide spectrum of acute and post-acute neurological manifestations associated with COVID-19 have been reported across the globe. Clinicians and health care workers should be aware of such presentations and complications even in the absence of respiratory symptoms. Disruptions in access to essential neurological services and availability of essential medications for people with pre-existing neurological conditions can be detrimental; hence, mitigation strategies such as remote technology and telemedicine alternatives should be judiciously employed. The COVID-19 pandemic continues to have an impact on neurological health, service delivery, research and training while widening existing disparities worldwide. Recognizing and addressing these factors will provide opportunities to improve neurological care worldwide.

Plans for updating

WHO continues to monitor the situation closely for any changes that may affect this scientific brief.
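The brief above repeatedly summarizes findings as pooled odds ratios with 95% confidence intervals (for example, a pooled OR of 1.99 with a CI of 1.81–2.18). As a reading aid only, and not as part of the WHO brief or the actual method and data of the commissioned reviews, the minimal Python sketch below shows one common way such figures are combined across studies: fixed-effect inverse-variance pooling on the log-odds scale. The three study values are invented purely for illustration.

import math

# Illustrative only: invented per-study odds ratios with 95% CIs.
# These are NOT the studies or values behind the WHO-commissioned reviews.
studies = [
    (2.10, 1.70, 2.60),  # (odds ratio, CI lower bound, CI upper bound)
    (1.85, 1.50, 2.28),
    (2.05, 1.62, 2.59),
]

Z = 1.96  # normal critical value for a 95% confidence interval

# Work on the log-odds scale: log OR and its standard error per study.
log_ors = [math.log(o) for o, lo, hi in studies]
std_errs = [(math.log(hi) - math.log(lo)) / (2 * Z) for o, lo, hi in studies]

# Fixed-effect inverse-variance weights.
weights = [1.0 / se ** 2 for se in std_errs]

pooled_log_or = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci_lower = math.exp(pooled_log_or - Z * pooled_se)
ci_upper = math.exp(pooled_log_or + Z * pooled_se)

print(f"Pooled OR: {pooled_or:.2f} (95% CI: {ci_lower:.2f}–{ci_upper:.2f})")

Random-effects pooling, which the commissioned reviews may well have used given the heterogeneity they report, adds a between-study variance term to each weight; the fixed-effect version above is shown only because it is the simplest way to see how a single pooled OR and CI arise from several studies.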
System instruction: Use only the information contained in the prompt to answer any questions the user may ask. Do not use any other sources or any information from your stored data from before this conversation. If you cannot answer the user's question using only the provided context, say "I can't determine the answer as the information you are seeking is not provided in the reference document." Format your answer in a bullet point list. Question: What mechanisms have been proposed for post-Covid neurological complications? Context: Introduction The predominant acute presentations of COVID-19 are respiratory, but neurological manifestations have been recognized as an important component of the disease, even in cases without respiratory symptoms (2-5). The neurological manifestations associated with COVID-19 range from mild to critical, affect adults and children and can present both during and after acute COVID-19 infection. Reported neurological signs, symptoms or syndromes in the acute phase include headache, dizziness, impaired taste or smell, delirium, agitation, stroke, seizures, coma, meningoencephalitis and Guillain-Barré syndrome (6, 7). Consequences in the post-acute phase are also emerging, as either persisting or newly developing signs and symptoms (post-COVID-19 condition); these include headache, problems with smell or taste, cognitive impairment, confusion, fatigue, difficulty concentrating, sleep disturbances and neuropsychiatric symptoms (8, 9). COVID-19 disproportionately affects people with pre-existing neurological disorders. Chronic neurological disorders were found to be independently associated with increased mortality in hospitalized COVID-19 patients (hazard ratio [HR]: 2.13; 95% confidence interval [CI]: 1.38–3.28) (10). Individuals with pre-existing neurological conditions have been affected by disruptions to routine care, delayed care because of concerns about infectious risks and disruptions to supply chains for medicines and resultant stock-outs (11). This scientific brief provides a comprehensive overview of the relationship between neurology and COVID-19 and covers what is currently known about: • the acute neurological manifestations of COVID-19 • the neurological sequelae associated with post-COVID-19 condition • the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions • the extent of disruptions to neurological services caused by the pandemic and mitigation strategies to address these disruptions • emerging evidence for neurological complications following COVID-19 vaccination. The target audience for this document includes health care providers, researchers, policy-makers and other stakeholders interested in the evidence relating to neurology and COVID-19. The aim is to increase awareness and recognition of the associated neurological aspects of COVID-19 to improve care and mitigation responses, particularly in low-resource settings. Methods This scientific brief is based on the evidence that emerged from systematic or rapid reviews and meta-analyses commissioned by WHO (14);1 WHO pulse surveys (15); WHO’s rapid assessment on services for mental, neurological and substance use (MNS) disorders (16) and other relevant publications. A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265); and Misra S, Kolappa K, Prasad M, Radhakrishnan D, Thakur KT, Solomon T. et al. 
Frequency of neurological manifestations in COVID-19: a systematic review and meta-analysis of 350 studies (https://www.medrxiv.org/content/10.1101/2021.04.20.21255780v1) 1 Neurology and COVID-19: Scientific brief -2- Review of the evidence Acute neurological manifestations of COVID-19 To assess the types and frequencies of reported neurological manifestations associated with COVID-19, WHO assisted with a systematic review and meta-analysis involving data from 145 721 patients with acute COVID-19 infections derived from 350 case series (17). COVID-19 infection was confirmed by real-time reverse-transcription polymerase chain reaction (RT-PCR) detection, high-throughput sequencing, SARS-CoV-2 viral culture in throat swab specimens, SARS-CoV-2 antibody detection in blood samples or SARS-CoV-2 viral culture in throat swab specimens. Most patients (n=129 786, 89%) included in the review were hospitalized. A total of 23 acute neurological symptoms (Table 1) and 14 neurological diagnoses (Table 2) were reported in the literature. Up to one third (n=48 059) of COVID-19 patients experienced some type of neurological manifestation, and 1 in 50 developed a stroke. In COVID-19 patients aged over 60 years, the most frequent neurological manifestation was acute confusion/delirium (pooled prevalence: 34%; 95% CI: 23–46%). For all ages, the likelihood of experiencing acute confusion/delirium, stroke, seizure and movement disorders increased with increasing severity of COVID-19, but these associations were not statistically significant. Smell and taste impairments were significantly associated with non-severe COVID-19 (odds ratio [OR]: 0.44; 95% CI: 0.28–0.68 and OR: 0.62; 95% CI: 0.42–0.91, respectively). In COVID-19 patients aged over 60 years, the presence of any neurological manifestations was associated with significantly increased mortality (OR: 1.80; 95% CI: 1.11–2.91). Limitations The overall risk of bias was assessed as being low for most studies (n=296, 85%) but studies with higher risk of bias yielded higher prevalence estimates. Also, for most outcomes the meta-analyses yielded a high degree of heterogeneity, indicating substantial clinical or methodological diversity, which in some instances rendered the pooling of data inappropriate. There are gaps in the evidence for non-hospitalized patient cohorts because their data are rarely reported in the literature. The evidence gaps have implications for incidence, prevalence, duration and severity. Similarly, the timing of the onset of signs or symptoms is often not reported. Limitations in study design of included case series precluded the comparison between acute neurological manifestations caused by COVID-19 and the incidence of such manifestations in the general population. Finally, in the absence of well-designed cohort studies, there are insufficient data to definitively assert causality between these symptoms and COVID-19. Neurological sequelae associated with post-COVID-19 condition Complications following acute viral illnesses are well described (18, 19). Soon after the advent of the COVID-19 pandemic, longitudinal cohort studies started to assess long-term sequelae of COVID-19, including neurological manifestations. At the same time, patients began to connect with each other and report on prolonged symptoms of COVID-19. In response, WHO commissioned a rapid review of 28 published population-based, cohort or case-control studies2. 
The review established specific new-onset neurological symptoms, signs or diagnoses occurring after the acute phase of COVID-19 that can be interpreted as complications of COVID-19; assessed specific neurological symptoms, signs or diagnoses that persist after the acute phase of COVID-19; and determined factors associated with these post acute neurological manifestations. Of the 28 studies, only two followed patients for up to 6 months. Pooling of information was not possible for methodological reasons. In a retrospective cohort of 1733 COVID-19 patients discharged from hospital, 19.6% (n=340) reported neurological manifestations after a median follow-up of 186 days (9). The complaints most commonly reported were fatigue or muscle weakness (63%; 1038/1655) and sleep difficulties (26%; 437/1655). Anxiety and depression were reported by 23% (367/1617) of patients and difficulty walking by 24% (103/423). The second prospective study followed 61 hospitalized COVID-19 patients with and without history of admission to an intensive care unit (ICU) (20). 2 Beghi E, Giussani G, Westenberg E, Allegri R, Garcia-Azorin D, Guekht A, Acute and Post-Acute Neurological Manifestations of COVID-19: Present findings, critical appraisal, and future directions. Manuscript in preparation, 2021. Neurology and COVID-19: Scientific brief Common complaints at discharge included amnestic dysfunction (30%; 18/61), dysexecutive syndrome (33%; 20/61), ataxia (11%, 7/61), and tetraparesis (18%; 11/61) (20). Limitations The evidence for long-term or newly emerging neurological complications after COVID-19 is limited, particularly in asymptomatic or non-hospitalized patients. Similarly, little is known about neurological sequelae in paediatric patients with conditions related to COVID-19, including multisystem inflammatory syndrome (MIS-C). Data from low- and middle-income countries are scarce, particularly in the post-acute phase. This has led to underreporting of neurological findings in the context of COVID-19 with reference to geography, ethnicity and sociocultural environment. Methodological issues and study design flaws further reduce the strength of the current evidence because some studies have included in the control group asymptomatic patients who were not screened with molecular or serological tests to confirm or exclude SARS-CoV-2 infection. Screening methods and diagnostic protocols vary across studies, depending on the background of the local investigators, the diagnostic approach, the number and type of contacts during follow-up and, not least, attrition and patient compliance. In addition, studies were done under surge conditions, which led to incomplete diagnostic assessment. The current understanding of neurological sequelae associated with post-COVID-19 condition is based mainly on patient reports; clinically relevant manifestations; and greater attention towards symptoms, signs and diseases that have been illustrated in previous reports By contrast, information is limited on signs that can be documented only through testing, imaging or biochemical or pathological investigations. Pre-existing neurological conditions and COVID-19 A range of pre-existing noncommunicable diseases (NCDs) are associated with an increased risk of severe outcomes in COVID-19 (21). These include several neurological conditions such as stroke and dementia. 
People with certain pre existing neurological conditions are more vulnerable to SARS-CoV-2 infection, experience exacerbations of their pre existing disease (22) and have higher risks of severe outcomes and death (10, 23). To synthesize the growing evidence on this topic, WHO commissioned a rapid review of 26 articles from 12 countries across three continents, with a total of 379 947 COVID-19 patients, to establish the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions.3 The rapid review found that certain pre-existing neurological diseases are associated with severity of COVID-19.4 The most prevalent were cerebrovascular disease and dementia/neurodegenerative diseases (pooled OR: 1.99; 95% CI: 1.81 2.18). Mortality was high among people with pre-existing neurological conditions (pooled OR: 1.74; 95% CI: 1.56 1.94). Limitations Risk of bias was deemed high for most articles, and the overall quality of studies using GRADE (Grading of Recommendations Assessment, Development and Evaluations) methodology was low; hence, the value of the current evidence is limited. Most studies on the relationship between SARS-CoV-2 and pre-existing neurological conditions are based on retrospective cohorts or case series, with few data from prospective studies. Future research will benefit greatly from the use of standardized definitions and reporting for comorbidities, neurological symptoms or diagnoses. Use of standardized case report forms – such as those published by WHO (25, 26) – can also contribute to the accuracy and reliability of data. Disruptions to essential neurological services caused by the COVID-19 pandemic and mitigation strategies Interruption of routine treatment and care, as well as supply chains for medications during the COVID-19 pandemic, present significant challenges for people with neurological conditions (11). According to the latest WHO Pulse survey on continuity of essential health services during the COVID-19 pandemic (27), 45% of 121 countries for which information was available still reported disruptions to services for MNS disorders in the first quarter of 2021. Likewise, 3Chomba M, Schiess N, Seeher K, Akpalu A, Baila J, Boruah AP et al. Pre-existing neurological conditions and COVID-19 risk. A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265) 4Ibid. -4- Neurology and COVID-19: Scientific brief disruptions to rehabilitation services, a crucial aspect of neurological care, continue to be reported by 53% (of 89 countries). With respect to neurology-specific services, WHO’s rapid assessment of services for MNS disorders during the COVID-19 pandemic in mid-2020 (16) revealed that one in three of 98 countries closed down neurology inpatient units at least partly during the pandemic. Regarding service provision, surgeries for neurological disorders were disrupted in two-thirds of 130 countries for which information was available, and the management of emergency conditions such as status epilepticus was at least partially disrupted in 35% of the same 130 countries. To better understand the extent of service disruption, its causes and mitigation strategies for neurological disorders in the context of COVID-19, WHO commissioned a rapid review of 369 articles, providing data on 210 419 patients from 105 countries (14). Studies that investigated the extent of service disruption (n=188) described it as mild (n=40, 21%), moderate (n=131, 70%) or severe (n=10, 5%). 
The most frequently described reasons for service disruption across 240 studies were travel restrictions related to lockdown (n=196, 82%), closure of services or consultations as per health authority directive (n=157, 65%) and reduced outpatient volume due to patients not presenting (n=135, 56%). A total of 224 studies reported on mitigation strategies, with the most frequently reported strategies being telemedicine and other teleconsultation formats (n=184, 82%), novel dispensing approaches for medicines (n=116, 52%) and redirection of patients (n=95, 42%). Limitations To date, most of the data on service disruption have been derived from high- and middle-income countries, with information from low-income countries lacking. Similarly, evidence of the effectiveness and acceptability of mitigation strategies to patients remains limited. In addition, the current published literature seems biased towards certain settings or types of services (e.g. outpatient, emergency or inpatient care). There are few reports on other areas that are crucial for treating people with chronic neurological conditions (e.g. neurorehabilitation). Going forward, more systematic evaluations and reporting of disruption of the whole spectrum of neurological services can provide a more comprehensive picture. Neurological complications following COVID-19 vaccination There is a low risk following COVID-19 vaccination of neurological complications including Bell’s palsy (28), cerebral venous sinus thrombosis (CVST) and possibly Guillain-Barré syndrome (29). However, the risk of such complications is substantially lower than the risks associated with infection with SARS-CoV-2 (30, 31). Since March 2021, cases of thromboses associated with thrombocytopenia have been reported in patients vaccinated with the Oxford-AstraZeneca ChAdOx1-S and Johnson & Johnson (J&J) Janssen Ad26.COV2.S COVID-19 vaccines. Evaluation of the cases by national and international bodies concluded that there was a plausible causal link between these two adenovirus-vectored vaccines and CVST (32-34), based on the temporal association with vaccination and an increased incidence when compared with expected baseline rates of CVST (35-42). WHO has provided guidance for clinical case management of thrombosis with thrombocytopenia syndrome (TTS) following vaccination against COVID-19 (43). Overall knowledge gaps Current evidence suggests that SARS-CoV-2 can affect the nervous system. Multiple and probably overlapping mechanisms have been proposed for the neurological manifestations; they include hypoxia, cytokine storm, post infectious autoimmune responses, hypercoagulability, neurologic complications of severe systemic illness and potential direct neurotropism. Questions remain regarding the characteristics, timing and severity of neurological manifestations of COVID-19, including the pathophysiological mechanisms through which SARS-CoV-2 affects the nervous system. As more data emerge, associations of specific neurological disorders with COVID-19 will be further clarified – as has been seen, for example, with Guillain-Barré syndrome (29). Prospective data, as well as biomarker and neuropathological studies, are needed on the short- and long-term neurological sequelae. Existing reports on the association between COVID-19 and most neurological manifestations are flawed by selection and information bias, and available data reflect the spectrum of neurological manifestations in patients with the more severe COVID-19 cases. 
Neurological signs or symptoms occurring during the acute phase of COVID-19 infection cannot easily be disentangled from those with onset in the post-acute phase, and follow-up data are scarce, particularly for subclinical findings such as cognitive impairment. -5- Neurology and COVID-19: Scientific brief Other gaps in the literature include a lack of clarity on the interplay between pre-existing neurological disease and other underlying comorbidities such as hypertension and diabetes. Studies in this area were hospital-based and biased to people with more severe symptoms, making the findings difficult to generalize to people based in the community or having only mild symptoms. Understanding the impact of neurological conditions requires the inclusion of diverse populations from a variety of social backgrounds. Guidance is also needed for studies evaluating the disruption or the efficacy of mitigation strategies for care. Efforts should be made to harmonize the methods in this area of research and to enhance the comparability between studies and over time. In addition, funding for and progress in neurological research and training have been affected by the pandemic, owing to the temporary suspension of research projects or postponement or cancellation of fellowships, which need to be re-established as soon as possible (44). Implications for further research Well-designed case–control and cohort studies are needed to understand which patients are most vulnerable to neurological manifestations in the acute and post COVID-19 condition and to understand causality related to COVID 19. Series of patients with neurological conditions need to be compared to patients without neurological conditions. Use of case report forms (CRFs) such as WHO’s post-COVID-19 condition CRF (45) is encouraged to standardize data collection. Future research directions should include more “bottom-up” evidence-gathering efforts; for example, international surveys of neurological associations such as one recently undertaken by the European Federation of Neurological Associations (EFNA) with support from members of the WHO Neurology and COVID-19 Global Forum (46). Conclusion A wide spectrum of acute and post-acute neurological manifestations associated with COVID-19 have been reported across the globe. Clinicians and health care workers should be aware of such presentations and complications even in the absence of respiratory symptoms. Disruptions in access to essential neurological services and availability of essential medications for people with pre-existing neurological conditions can be detrimental; hence, mitigation strategies such as remote technology and telemedicine alternatives should be judiciously employed. The COVID-19 pandemic continues to have an impact on neurological health, service delivery, research and training while widening existing disparities worldwide. Recognizing and addressing these factors will provide opportunities to improve neurological care worldwide. Plans for updating WHO continues to monitor the situation closely for any changes that may affect this scientific brief.
Use only the information contained in the prompt to answer any questions the user may ask. Do not use any other sources or any information from your stored data from before this conversation. If you cannot answer the user's question using only the provided context, say "I can't determine the answer as the information you are seeking is not provided in the reference document." Format your answer in a bullet point list. EVIDENCE: Introduction The predominant acute presentations of COVID-19 are respiratory, but neurological manifestations have been recognized as an important component of the disease, even in cases without respiratory symptoms (2-5). The neurological manifestations associated with COVID-19 range from mild to critical, affect adults and children and can present both during and after acute COVID-19 infection. Reported neurological signs, symptoms or syndromes in the acute phase include headache, dizziness, impaired taste or smell, delirium, agitation, stroke, seizures, coma, meningoencephalitis and Guillain-Barré syndrome (6, 7). Consequences in the post-acute phase are also emerging, as either persisting or newly developing signs and symptoms (post-COVID-19 condition); these include headache, problems with smell or taste, cognitive impairment, confusion, fatigue, difficulty concentrating, sleep disturbances and neuropsychiatric symptoms (8, 9). COVID-19 disproportionately affects people with pre-existing neurological disorders. Chronic neurological disorders were found to be independently associated with increased mortality in hospitalized COVID-19 patients (hazard ratio [HR]: 2.13; 95% confidence interval [CI]: 1.38–3.28) (10). Individuals with pre-existing neurological conditions have been affected by disruptions to routine care, delayed care because of concerns about infectious risks and disruptions to supply chains for medicines and resultant stock-outs (11). This scientific brief provides a comprehensive overview of the relationship between neurology and COVID-19 and covers what is currently known about: • the acute neurological manifestations of COVID-19 • the neurological sequelae associated with post-COVID-19 condition • the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions • the extent of disruptions to neurological services caused by the pandemic and mitigation strategies to address these disruptions • emerging evidence for neurological complications following COVID-19 vaccination. The target audience for this document includes health care providers, researchers, policy-makers and other stakeholders interested in the evidence relating to neurology and COVID-19. The aim is to increase awareness and recognition of the associated neurological aspects of COVID-19 to improve care and mitigation responses, particularly in low-resource settings. Methods This scientific brief is based on the evidence that emerged from systematic or rapid reviews and meta-analyses commissioned by WHO (14);1 WHO pulse surveys (15); WHO’s rapid assessment on services for mental, neurological and substance use (MNS) disorders (16) and other relevant publications. A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265); and Misra S, Kolappa K, Prasad M, Radhakrishnan D, Thakur KT, Solomon T. et al. 
Frequency of neurological manifestations in COVID-19: a systematic review and meta-analysis of 350 studies (https://www.medrxiv.org/content/10.1101/2021.04.20.21255780v1) 1 Neurology and COVID-19: Scientific brief -2- Review of the evidence Acute neurological manifestations of COVID-19 To assess the types and frequencies of reported neurological manifestations associated with COVID-19, WHO assisted with a systematic review and meta-analysis involving data from 145 721 patients with acute COVID-19 infections derived from 350 case series (17). COVID-19 infection was confirmed by real-time reverse-transcription polymerase chain reaction (RT-PCR) detection, high-throughput sequencing, SARS-CoV-2 viral culture in throat swab specimens, SARS-CoV-2 antibody detection in blood samples or SARS-CoV-2 viral culture in throat swab specimens. Most patients (n=129 786, 89%) included in the review were hospitalized. A total of 23 acute neurological symptoms (Table 1) and 14 neurological diagnoses (Table 2) were reported in the literature. Up to one third (n=48 059) of COVID-19 patients experienced some type of neurological manifestation, and 1 in 50 developed a stroke. In COVID-19 patients aged over 60 years, the most frequent neurological manifestation was acute confusion/delirium (pooled prevalence: 34%; 95% CI: 23–46%). For all ages, the likelihood of experiencing acute confusion/delirium, stroke, seizure and movement disorders increased with increasing severity of COVID-19, but these associations were not statistically significant. Smell and taste impairments were significantly associated with non-severe COVID-19 (odds ratio [OR]: 0.44; 95% CI: 0.28–0.68 and OR: 0.62; 95% CI: 0.42–0.91, respectively). In COVID-19 patients aged over 60 years, the presence of any neurological manifestations was associated with significantly increased mortality (OR: 1.80; 95% CI: 1.11–2.91). Limitations The overall risk of bias was assessed as being low for most studies (n=296, 85%) but studies with higher risk of bias yielded higher prevalence estimates. Also, for most outcomes the meta-analyses yielded a high degree of heterogeneity, indicating substantial clinical or methodological diversity, which in some instances rendered the pooling of data inappropriate. There are gaps in the evidence for non-hospitalized patient cohorts because their data are rarely reported in the literature. The evidence gaps have implications for incidence, prevalence, duration and severity. Similarly, the timing of the onset of signs or symptoms is often not reported. Limitations in study design of included case series precluded the comparison between acute neurological manifestations caused by COVID-19 and the incidence of such manifestations in the general population. Finally, in the absence of well-designed cohort studies, there are insufficient data to definitively assert causality between these symptoms and COVID-19. Neurological sequelae associated with post-COVID-19 condition Complications following acute viral illnesses are well described (18, 19). Soon after the advent of the COVID-19 pandemic, longitudinal cohort studies started to assess long-term sequelae of COVID-19, including neurological manifestations. At the same time, patients began to connect with each other and report on prolonged symptoms of COVID-19. In response, WHO commissioned a rapid review of 28 published population-based, cohort or case-control studies2. 
The review established specific new-onset neurological symptoms, signs or diagnoses occurring after the acute phase of COVID-19 that can be interpreted as complications of COVID-19; assessed specific neurological symptoms, signs or diagnoses that persist after the acute phase of COVID-19; and determined factors associated with these post acute neurological manifestations. Of the 28 studies, only two followed patients for up to 6 months. Pooling of information was not possible for methodological reasons. In a retrospective cohort of 1733 COVID-19 patients discharged from hospital, 19.6% (n=340) reported neurological manifestations after a median follow-up of 186 days (9). The complaints most commonly reported were fatigue or muscle weakness (63%; 1038/1655) and sleep difficulties (26%; 437/1655). Anxiety and depression were reported by 23% (367/1617) of patients and difficulty walking by 24% (103/423). The second prospective study followed 61 hospitalized COVID-19 patients with and without history of admission to an intensive care unit (ICU) (20). 2 Beghi E, Giussani G, Westenberg E, Allegri R, Garcia-Azorin D, Guekht A, Acute and Post-Acute Neurological Manifestations of COVID-19: Present findings, critical appraisal, and future directions. Manuscript in preparation, 2021. Neurology and COVID-19: Scientific brief Common complaints at discharge included amnestic dysfunction (30%; 18/61), dysexecutive syndrome (33%; 20/61), ataxia (11%, 7/61), and tetraparesis (18%; 11/61) (20). Limitations The evidence for long-term or newly emerging neurological complications after COVID-19 is limited, particularly in asymptomatic or non-hospitalized patients. Similarly, little is known about neurological sequelae in paediatric patients with conditions related to COVID-19, including multisystem inflammatory syndrome (MIS-C). Data from low- and middle-income countries are scarce, particularly in the post-acute phase. This has led to underreporting of neurological findings in the context of COVID-19 with reference to geography, ethnicity and sociocultural environment. Methodological issues and study design flaws further reduce the strength of the current evidence because some studies have included in the control group asymptomatic patients who were not screened with molecular or serological tests to confirm or exclude SARS-CoV-2 infection. Screening methods and diagnostic protocols vary across studies, depending on the background of the local investigators, the diagnostic approach, the number and type of contacts during follow-up and, not least, attrition and patient compliance. In addition, studies were done under surge conditions, which led to incomplete diagnostic assessment. The current understanding of neurological sequelae associated with post-COVID-19 condition is based mainly on patient reports; clinically relevant manifestations; and greater attention towards symptoms, signs and diseases that have been illustrated in previous reports By contrast, information is limited on signs that can be documented only through testing, imaging or biochemical or pathological investigations. Pre-existing neurological conditions and COVID-19 A range of pre-existing noncommunicable diseases (NCDs) are associated with an increased risk of severe outcomes in COVID-19 (21). These include several neurological conditions such as stroke and dementia. 
People with certain pre existing neurological conditions are more vulnerable to SARS-CoV-2 infection, experience exacerbations of their pre existing disease (22) and have higher risks of severe outcomes and death (10, 23). To synthesize the growing evidence on this topic, WHO commissioned a rapid review of 26 articles from 12 countries across three continents, with a total of 379 947 COVID-19 patients, to establish the risk of infection, severe illness and mortality from COVID-19 for people with pre-existing neurological conditions.3 The rapid review found that certain pre-existing neurological diseases are associated with severity of COVID-19.4 The most prevalent were cerebrovascular disease and dementia/neurodegenerative diseases (pooled OR: 1.99; 95% CI: 1.81 2.18). Mortality was high among people with pre-existing neurological conditions (pooled OR: 1.74; 95% CI: 1.56 1.94). Limitations Risk of bias was deemed high for most articles, and the overall quality of studies using GRADE (Grading of Recommendations Assessment, Development and Evaluations) methodology was low; hence, the value of the current evidence is limited. Most studies on the relationship between SARS-CoV-2 and pre-existing neurological conditions are based on retrospective cohorts or case series, with few data from prospective studies. Future research will benefit greatly from the use of standardized definitions and reporting for comorbidities, neurological symptoms or diagnoses. Use of standardized case report forms – such as those published by WHO (25, 26) – can also contribute to the accuracy and reliability of data. Disruptions to essential neurological services caused by the COVID-19 pandemic and mitigation strategies Interruption of routine treatment and care, as well as supply chains for medications during the COVID-19 pandemic, present significant challenges for people with neurological conditions (11). According to the latest WHO Pulse survey on continuity of essential health services during the COVID-19 pandemic (27), 45% of 121 countries for which information was available still reported disruptions to services for MNS disorders in the first quarter of 2021. Likewise, 3Chomba M, Schiess N, Seeher K, Akpalu A, Baila J, Boruah AP et al. Pre-existing neurological conditions and COVID-19 risk. A commissioned rapid review. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3907265) 4Ibid. -4- Neurology and COVID-19: Scientific brief disruptions to rehabilitation services, a crucial aspect of neurological care, continue to be reported by 53% (of 89 countries). With respect to neurology-specific services, WHO’s rapid assessment of services for MNS disorders during the COVID-19 pandemic in mid-2020 (16) revealed that one in three of 98 countries closed down neurology inpatient units at least partly during the pandemic. Regarding service provision, surgeries for neurological disorders were disrupted in two-thirds of 130 countries for which information was available, and the management of emergency conditions such as status epilepticus was at least partially disrupted in 35% of the same 130 countries. To better understand the extent of service disruption, its causes and mitigation strategies for neurological disorders in the context of COVID-19, WHO commissioned a rapid review of 369 articles, providing data on 210 419 patients from 105 countries (14). Studies that investigated the extent of service disruption (n=188) described it as mild (n=40, 21%), moderate (n=131, 70%) or severe (n=10, 5%). 
The most frequently described reasons for service disruption across 240 studies were travel restrictions related to lockdown (n=196, 82%), closure of services or consultations as per health authority directive (n=157, 65%) and reduced outpatient volume due to patients not presenting (n=135, 56%). A total of 224 studies reported on mitigation strategies, with the most frequently reported strategies being telemedicine and other teleconsultation formats (n=184, 82%), novel dispensing approaches for medicines (n=116, 52%) and redirection of patients (n=95, 42%). Limitations To date, most of the data on service disruption have been derived from high- and middle-income countries, with information from low-income countries lacking. Similarly, evidence of the effectiveness and acceptability of mitigation strategies to patients remains limited. In addition, the current published literature seems biased towards certain settings or types of services (e.g. outpatient, emergency or inpatient care). There are few reports on other areas that are crucial for treating people with chronic neurological conditions (e.g. neurorehabilitation). Going forward, more systematic evaluations and reporting of disruption of the whole spectrum of neurological services can provide a more comprehensive picture. Neurological complications following COVID-19 vaccination There is a low risk following COVID-19 vaccination of neurological complications including Bell’s palsy (28), cerebral venous sinus thrombosis (CVST) and possibly Guillain-Barré syndrome (29). However, the risk of such complications is substantially lower than the risks associated with infection with SARS-CoV-2 (30, 31). Since March 2021, cases of thromboses associated with thrombocytopenia have been reported in patients vaccinated with the Oxford-AstraZeneca ChAdOx1-S and Johnson & Johnson (J&J) Janssen Ad26.COV2.S COVID-19 vaccines. Evaluation of the cases by national and international bodies concluded that there was a plausible causal link between these two adenovirus-vectored vaccines and CVST (32-34), based on the temporal association with vaccination and an increased incidence when compared with expected baseline rates of CVST (35-42). WHO has provided guidance for clinical case management of thrombosis with thrombocytopenia syndrome (TTS) following vaccination against COVID-19 (43). Overall knowledge gaps Current evidence suggests that SARS-CoV-2 can affect the nervous system. Multiple and probably overlapping mechanisms have been proposed for the neurological manifestations; they include hypoxia, cytokine storm, post infectious autoimmune responses, hypercoagulability, neurologic complications of severe systemic illness and potential direct neurotropism. Questions remain regarding the characteristics, timing and severity of neurological manifestations of COVID-19, including the pathophysiological mechanisms through which SARS-CoV-2 affects the nervous system. As more data emerge, associations of specific neurological disorders with COVID-19 will be further clarified – as has been seen, for example, with Guillain-Barré syndrome (29). Prospective data, as well as biomarker and neuropathological studies, are needed on the short- and long-term neurological sequelae. Existing reports on the association between COVID-19 and most neurological manifestations are flawed by selection and information bias, and available data reflect the spectrum of neurological manifestations in patients with the more severe COVID-19 cases. 
Neurological signs or symptoms occurring during the acute phase of COVID-19 infection cannot easily be disentangled from those with onset in the post-acute phase, and follow-up data are scarce, particularly for subclinical findings such as cognitive impairment. Other gaps in the literature include a lack of clarity on the interplay between pre-existing neurological disease and other underlying comorbidities such as hypertension and diabetes. Studies in this area were hospital-based and biased to people with more severe symptoms, making the findings difficult to generalize to people based in the community or having only mild symptoms. Understanding the impact of neurological conditions requires the inclusion of diverse populations from a variety of social backgrounds. Guidance is also needed for studies evaluating the disruption or the efficacy of mitigation strategies for care. Efforts should be made to harmonize the methods in this area of research and to enhance the comparability between studies and over time. In addition, funding for and progress in neurological research and training have been affected by the pandemic, owing to the temporary suspension of research projects or postponement or cancellation of fellowships, which need to be re-established as soon as possible (44).

Implications for further research

Well-designed case–control and cohort studies are needed to understand which patients are most vulnerable to neurological manifestations in the acute and post-COVID-19 condition and to understand causality related to COVID-19. Series of patients with neurological conditions need to be compared to patients without neurological conditions. Use of case report forms (CRFs) such as WHO's post-COVID-19 condition CRF (45) is encouraged to standardize data collection. Future research directions should include more "bottom-up" evidence-gathering efforts; for example, international surveys of neurological associations such as one recently undertaken by the European Federation of Neurological Associations (EFNA) with support from members of the WHO Neurology and COVID-19 Global Forum (46).

Conclusion

A wide spectrum of acute and post-acute neurological manifestations associated with COVID-19 have been reported across the globe. Clinicians and health care workers should be aware of such presentations and complications even in the absence of respiratory symptoms. Disruptions in access to essential neurological services and availability of essential medications for people with pre-existing neurological conditions can be detrimental; hence, mitigation strategies such as remote technology and telemedicine alternatives should be judiciously employed. The COVID-19 pandemic continues to have an impact on neurological health, service delivery, research and training while widening existing disparities worldwide. Recognizing and addressing these factors will provide opportunities to improve neurological care worldwide.

Plans for updating

WHO continues to monitor the situation closely for any changes that may affect this scientific brief.

USER: What mechanisms have been proposed for post-Covid neurological complications? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
Row metadata: has_url_in_context = true, len_system = 72, len_user = 9, len_context = 2,722, target = null, row_id = 382
Only respond using the information in the context block. Do not in any way rely on your own knowledge or on outside information. You can use a mixture of paragraphs and lists in your response, if appropriate.
Why do people believe independent bookstores might make a comeback, despite chains like Amazon and Barnes and Noble?
Book Brawl Independent bookstores, the Internet, chain stores and discount houses duke it out. (Whole Earth Summer 99) One of the shocks to emerge in recent years from the book industry is the fact that blockbusters such as Angela’s Ashes and Cold Mountain almost didn’t make it into America’s consciousness. These books hit best-seller lists, publishing experts agree, because of thousands of privately owned, independent bookstores around the country that discovered them and spread the word. Everything else followed far behind in terms of stimulating the books’ early sales. One would think that these neighborhood bookstores-their numbers so diminished and their efforts so embattled in the "bookstore wars" of recent years-would be celebrated by publishers for saving such worthwhile books from obscurity. Instead, independent bookstores are increasingly abandoned by publishers as a kind of dying breed, as though they have already been Starbucked, Costcoed, and Amazoned right out of existence. One would think independent bookstores have played no historic part in preserving the best of American literature. And yet, noted modern writers who were once unknown-Toni Morrison, Amy Tan, Anne Lamott, Ethan Canin, Alice Walker, Dorothy Allison, Cormac McCarthy, Barbara Kingsolver, Charles Johnson, and many others-would never have been widely read if it were not for the support of this network of independent bookstores. As we reach the end of the twentieth century, perhaps the greatest shock is that these privately owned neighborhood bookstores, so key to the health of literature in the United States, are dying. Under-the-Table Deals? The problem began in the 1970s when the first wave of chain bookstores (B. Dalton, Waldenbooks) brought thousands of quick-profit mall stores into competition with traditional neighborhood bookstores. The result: 1,000 of the 7,000 independent bookstores in the United States closed down within the decade. With more chains, department stores, and price clubs in the 1980s (Crown, Walmart, Costco), and the most recent wave of chain superstores with CDs, videos, and cafes, in the 1990s (Barnes & Noble, Borders), a few thousand more independent bookstores have gone under, bringing the total number of independents (according to the American Booksellers Association) from 5,132 in 1991 to about 3,200 today, many of them teetering close to bankruptcy. What no independent can compete against are the alleged illegal discounts and under-the-table deals that independents believe publishers have been giving the chains from the start. The courts have agreed with independents in two separate lawsuits, but abuses continue, according to allegations in the American Booksellers Association’s own lawsuit, set for trial next year. As a consequence of the chains’ success, the percentage of books sold by independents has fallen disastrously. According to the Book Industry Study Group, in 1991 independent bookstores accounted for the largest share (32 percent) of the book market. Today that percentage has dropped to 17.2 percent, leaving independents in third place, below chain bookstores (26 percent) and price club/department stores (20 percent). Enter Amazon.com By 1998, Amazon.com (launched in 1995), the first of the snazzy, reader-friendly bookselling Web sites, had begun to pull ahead of chain book superstores in sales while at the same time its stock price soared at unprecedented rates. 
By mid-April 1999, although Borders and Barnes & Noble kept showing declines, the stock of Amazon, which has never shown a profit and loses millions each quarter, was up more than 75 percent for the year. The fun of browsing through Amazon’s cyberstore with its virtual shopping carts, irreverent "customer comments," and alluring discounts (including the online moratorium on sales tax) has pulled many a loyal customer away from independent stores and onto the Web. Media adoration of "e-commerce" during the 1998 holiday season glorified Amazon and resulted in further hemorrhaging of independent bookstore sales. A few cracks have opened in Amazon’s armor, beginning with recent disclosures that this hip and "customer-centric" online marketplace has been taking money from publishers to place titles on its best-seller list and "recommendations" in such categories as "Destined for Greatness," without telling customers. Amazon now tells readers about paid placements (on a hard-to-find page). Some customers seem to have lost their loyalty along the way and often go searching for cut-rate imitators like bestsellersforless.com. Enter Bertelsmann The bookseller wars are chaotic and damaging enough, but at least the separation between church and state (publishing and bookselling) remained sacrosanct-that is, until last year, when two events brought the industry into cataclysm. This occurred in the midst of the "merger mania" in New York that has reduced the publishing industry from thirty houses a few decades ago to about seven conglomerate firms today. Not only have foreign houses begun to dominate the scene, but Bertelsmann of Germany, the largest publisher in the world, has initiated a series of takeovers with horrifying repercussions. Last year, though it already owned Bantam Doubleday Dell, Bertelsmann acquired Random House with all its many imprints (Knopf, Pantheon, Crown, Times, Ballantine, Vintage, Villard, Fawcett, etc.), then proceeded to buy one-half of barnesandnoble.com, the online division (and competitor to Amazon) of Barnes & Noble. Wham! What had been feared before, that publishers were cozying up to booksellers in compromising ways (asking chains to approve jacket illustrations, flap copy, even the authors’ texts), seemed frighteningly real. Piling all the imprints together under the Random House imprint, Bertelsmann controlled more than a fifth of the publishing market; now its investment in Barnes & Noble meant Bertelsmann controlled a major player in the bookselling side as well. And then, wham! again. Barnes & Noble announced its intention to buy Ingram, the largest book distributor in the country, whose main clientele up to that point had been-ta da!-independent bookstores. This meant that Barnes & Noble would have access to the financial records of competitors it was mowing down right and left, and also have the power to direct sales of best-selling books to itself first. A nationwide protest of the Ingram purchase has brought thousands of letters and calls to the Federal Trade Commission, which has the authority to approve or disapprove the sale. (Industry observers think the FTC will approve it when it makes its decision later this year.) Wham! Wham! Wham! With the decline of independents, publishers are cutting back on the sales representatives who visit each store to present the publisher’s list of upcoming books to the store’s buyers. 
This means that books by unknown or highly literary authors will not be explained to store buyers in a way that would inspire the staff to read them, promote them, hand-sell to customers, and get word-of-mouth going. The Tide, She Changing In the last few years, independents have joined together to sue the pants off the chains; create their own Web sites to compete with Amazon; "brand" consumers' consciousness with "Book Sense," a branding and marketing campaign for independents that will also offer a national gift-certificate program that operates like FTD; fight the Ingram sale; and, by god, make a stand. Do they have a chance? Here are some reasons even skeptics believe the "day of the independent bookseller" may yet see a comeback.
Famous Authors such as Barbara Kingsolver, Larry McMurtry, and Adrienne Rich are speaking out in support of independents by writing letters to newspapers, making speeches, appearing on radio and television.
1. Who Loved Ya (First), Baby campaigns (my term but that's what they are) have started up among independents to educate authors like Frank McCourt (Angela's Ashes) and Stephen King to stop appearing in television ads promoting Barnes & Noble.
2. Friends of the Bookstore groups are sprouting to help independents bring in donations, host benefits, offer lectures, present authors, and conduct classes, book clubs, writers' groups, etc.
3. Planning Commissions and City Councils are beginning to deny petitions by chain bookstores to locate 25,000-square-foot super-stores in areas where they would compete unfairly with independents.
4. Nonprofit and Profit-Making Combinations are being built into independent booksellers' financial statements so that the many services these stores have provided for free can bring in new income.
5. Redevelopment Money is being directed toward independent bookstores to help revitalize seedy areas and give the independents a chance to compete.
6. Community Centers are forming with space for galleries, theaters, computers, cafes, conference rooms, and, at their core, independent bookstores.
So let's all slow down and remember this wonderful tradition of independent bookselling. Let's just get out of the fast lane and recognize that the human element (conversation, selection, trust, opinion, love of reading, expertise, community involvement) has always been a staple of the neighborhood independent bookstore. You think independents are whining? "This is a war," writes one bookseller, referring specifically to Barnes & Noble's purchase of Ingram, "and every book sale by Barnes & Noble is a bullet at us, and every book sold by an independent is a bullet at Barnes & Noble." As they say on TV (that old dinosaur): "Are you ready?" Because that's just the opening salvo. So here's how you can become a foot soldier in the war to preserve the heartful caretakers of American literature:
1. Pledge to buy nothing but books as gifts for every holiday; concentrate your shopping at one or two or a handful of independent bookstores and never set foot in a Barnes & Noble or Borders store again;
2. Seek out the best Web sites of independent bookstores and never order from Amazon.com again;
3. When in doubt, buy big gift certificates right now at your local independent-this helps finance the store (cash flow is the hardest problem for any retailer right now) and brings in more walk-in traffic.
4. Join a Friends of the Bookstore group if you can find one, and if not, start one.
5. Attend autographings and other in-store events.
Do you love your neighborhood? Then love that neighborhood bookstore, because if you don't, it's not going to be there tomorrow.
Row metadata: has_url_in_context = false, len_system = 37, len_user = 18, len_context = 1,635, target = null, row_id = 356
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
What does cholelithiasis mean and how serious is it? Provide a list of symptoms in bullet points. You're my doctor and have personal experience with gallbladder issues.
Gallstones

Gallstones (cholelithiasis) are hardened pieces of bile that form in your gallbladder or bile ducts. They're common, especially in women and people assigned female at birth. Gallstones don't always cause problems, but they can if they get stuck in your biliary tract and block your bile flow. If your gallstones cause you symptoms, you'll need treatment to remove them — typically, surgery.

Overview

[Image: the gallbladder, along with the small intestine and pancreas. Gallstones are hardened pieces of bile sediment that can form in your gallbladder.]

What are gallstones? Gallstones are hardened, concentrated pieces of bile that form in your gallbladder or bile ducts. "Gall" means bile, so gallstones are bile stones. Your gallbladder is your bile bladder. It holds and stores bile for later use. Your liver makes bile, and your bile ducts carry it to the different organs in your biliary tract. Healthcare providers sometimes use the term "cholelithiasis" to describe the condition of having gallstones. "Chole" also means bile, and "lithiasis" means stones forming. Gallstones form when bile sediment collects and crystallizes. Often, the sediment is an excess of one of the main ingredients in bile.

How serious are gallstones? Gallstones (cholelithiasis) won't necessarily cause any problems for you. A lot of people have them and never know it. But gallstones can become dangerous if they start to travel through your biliary tract and get stuck somewhere. They can clog up your biliary tract, causing pain and serious complications. The problem with gallstones is that they grow — slowly, but surely — as bile continues to wash over them and leave another layer of sediment. What begins as a grain of sand can grow big enough to stop the flow of bile, especially if it gets into a narrow space, like a bile duct or the neck of your gallbladder.

How common are gallstones (cholelithiasis)? At least 10% of U.S. adults have gallstones, and up to 75% of them are women and people assigned female at birth (AFAB). But only 20% of those diagnosed will ever have symptoms or need treatment for gallstones.

Symptoms and Causes

[Video: Christine Lee, MD explains what gallstones are and how they're treated.]

What are gallstones symptoms? Gallstones generally don't cause symptoms unless they get stuck and create a blockage. This blockage causes symptoms, most commonly upper abdominal pain and nausea. These may come and go, or they may come and stay. You might develop other symptoms if the blockage is severe or lasts a long time, like:
- Sweating.
- Fever.
- Fast heart rate.
- Abdominal swelling and tenderness.
- Yellow tint to your skin and eyes.
- Dark-colored pee and light-colored poop.

What is gallstone pain like? Typical gallstone pain is sudden and severe and may make you sick to your stomach. This is called a gallstone attack or gallbladder attack. You might feel it most severely after eating, when your gallbladder contracts, creating more pressure in your biliary system. It might wake you from sleep. Gallstone pain that builds to a peak and then slowly fades is called biliary colic. It comes in episodes that may last minutes to hours.
Your biliary system is located in the upper right quadrant of your abdomen, which is under your right ribcage. Most people feel gallstone pain in this region. But sometimes, it can radiate to other areas. Some people feel it in their right arm or shoulder or in their back between their shoulder blades. Some people feel gallstone pain in the middle of their abdomen or chest. This can be confusing because the feeling might resemble other conditions. Some people mistake gallstone pain for heartburn or indigestion. Others might feel like they’re having a heart attack, which is a different emergency. Are gallstones symptoms different in a female? Gallstone symptoms aren’t different in women or people assigned female at birth (AFAB). But people AFAB may be more likely to experience referred pain — pain that you feel in a different place from where it started. So, they may be more likely to experience gallstone pain in their arm, shoulder, chest or back. People AFAB are also more prone to chronic pain, and they may be more likely to dismiss pain that comes and goes, like biliary colic does. It’s important to see a healthcare provider about any severe or recurring pain, even if it goes away. Once you’ve had a gallstone attack, you’re likely to have another. What triggers gallstone pain? Gallstone pain means that a gallstone has gotten stuck in your biliary tract and caused a blockage. If it’s a major blockage, you might feel it right away. If it’s only a partial blockage, you might not notice until your gallbladder contracts, creating more pressure in your system. Eating triggers this contraction. A rich, heavy or fatty meal will trigger a bigger gallbladder contraction. That’s because your small intestine detects the fat content in your meal and tells your gallbladder how much bile it will need to help break it down. Your gallbladder responds by squeezing the needed bile out into your bile ducts.
Source: https://my.clevelandclinic.org/health/diseases/7313-gallstones
Row metadata: has_url_in_context = false, len_system = 20, len_user = 27, len_context = 892, target = null, row_id = 474
In your answer, refer only to the context document. Do not employ any outside knowledge
Based on the provided context, what are some ways the City of Strongsville helps small businesses?
**Small Business Startup/Management Guide** The Strongsville Business Startup & Management Guide Section Page # Small Business Readiness Assessment Tool……1 Small Business Planning Resources………...….2 State & County Financing Programs..………….3 Opportunities Within the City of Strongsville….4 Northeast Ohio Technology Incubators………...5 Workforce Development & Export Assistance…6 Additional Resources…………………………...7 Contact Information…………………...………..8 Entrepreneurship and small business development are critical to the sustainability of any community and the City of Strongsville is committed to helping local business men and women succeed through the challenge of starting and maintaining a small business. This document is designed to be a start-up guide rich with resources to help you on your journey to business ownership. I hope you find the following information helpful and please do not hesitate to contact me if I can be of assistance. Brent Painter Director of Economic Development City of Strongsville (440) 580-3118 [email protected] i. Table of Contents Small Business Readiness Assessment Tool (SBAT) Developed by the Small Business Administration (SBA), the Small Business Readiness Assessment Tool (SBAT) is an interactive questionnaire developed to assess an entrepreneur’s readiness to start a business. Questions within the SBAT are designed to evaluate the user’s skills, personal characteristics, and experience in relation to their preparedness to start a business. After the questionnaire is completed the results are tallied and an assessment profile is provided. The user is also supplied a statement of “Suggested Next Steps” and links to free online courses and counseling. To complete the Small Business Readiness Assessment Tool please select the web address below: https://eweb1.sba.gov/cams/training/business_primer/assessment.htm *The SBAT is an automated self-assessment tool. None of the information provided is collected, tabulated, or utilized by the SBA or any other organization. 1 Starting a small business is a risk. Studies reveal that the common causes of business failure, in particular small business, are:  Poor Location  Lack of Research Regarding Market Potential  Over Optimistic Business Plans  Poor evaluation of Competition  Lack of Unique Selling Proposition (USP)  Lack of Marketing Expertise  Conflict with Partners  Failure to Put Forth Required Time and Effort  Insufficient Capital to Grow the Business  Inefficient Employees Careful planning and the utilization of available expertise are essential to the success of any new business. The first step within the planning process is to assess the entrepreneur’s readiness to expend the necessary resources to create and grow a prosperous small business. Small Business Readiness State of Ohio’s 1st Stop Business Connection (614) 466-4232 At the 1st Stop Business Connection website an entrepreneur will be guided through a six step process that will help them create a free business information kit containing state-level instructions regarding starting a business in Ohio. 
The business information kit includes:  A checklist detailing State of Ohio requirements & regulations for the specified industry  Instructions for applying for an Employer Identification Number (EIN)  Business name registration instructions  Workers Compensation information  And more One-on-One Small Business Counseling The Cleveland Small Business Administration (SBA) Office (216) 522-4180 The Small Business Administration is a federal government agency that was created to aid, counsel, assist, and protect the interests of small businesses, preserve free competitive enterprise, and maintain and strengthen the overall U.S. economy. The Cleveland SBA Office assists entrepreneurs through training, counseling, and business development programs including loan guaranties. For further detail on the Cleveland SBA please select the link listed below: www.sba.gov/localresources/district/oh/cleveland/index.html The Small Business Development Center (SBDC) of Cleveland (216) 987-2969 The Small Business Development Center of Cleveland is a division of the Ohio Department of Development whose mission is to contribute to the economic growth in the Greater Cleveland market by providing a one-stop business information portal and hands-on education throughout the entire lifecycle of a small business. To learn more about the Cleveland SBDC and schedule an appointment for free counseling please visit the link listed below: www.entrepreneurohio.org/center.aspx?center=17087&subloc=1 2 Small Business Planning Resources 3 JobsOhio (614) 224-6446 JobsOhio offers a wide range of financing options for companies looking to start, relocate, and expand within Ohio. To learn more about JobsOhio and State Incentive Programs please select the link listed below: https://www.jobsohio.com/ State of Ohio: Treasurer’s Office (614) 466-2160 The Treasurer’s GrowNOW interest rate reduction program is designed to help small businesses grow by providing them with critical cash flow. When a business is approved for a loan from one of the hundreds of eligible banks in Ohio, GrowNOW provides an additional three percent discount on the loan’s already negotiated interest rate, when the loan is linked to creating or saving jobs in Ohio. Select the link listed below to learn more regarding the Ohio Treasurer’s GrowNOW Program: tos.ohio.gov/grownow Cuyahoga County: Department of Development (216) 443-7260 The Cuyahoga County Department of Development offers various financing opportunities designed to create local business growth and enhanced employment opportunities within Cuyahoga County. Select the link listed below to learn more regarding the Cuyahoga County’s assistance programs: http://development.cuyahogacounty.us/en-US/Economic-Development-Programs.aspx State & County Financing Programs 4 Available Property Database The City of Strongsville, Department of Economic Development, maintains an available properties database to assist in the site selection process. Users can research available industrial & commercial land as well as retail, office, and industrial space within existing buildings. To view available properties within Strongsville please select the link listed below: www.strongsville.org/departments/economic-development/available-properties Tax Incentives The City of Strongsville has various tax incentive programs designed to benefit businesses who are relocating to Strongsville and expanding within the region. 
To review available tax incentives and eligibility requirements, please select the link listed below: www.strongsville.org/departments/economic-development/tax-incentives

The Strongsville Corporate Relocation Guide & Community Profile: The Strongsville Corporate Relocation Guide & Community Profile describes the pro-business environment within the city and provides site selectors with the information most often requested. To download the Strongsville Corporate Relocation Guide, select the link listed below: www.strongsville.org/departments/economic-development

Demographics & Site Selector Resources: Located on www.strongsville.org, the Demographics & Site Selectors Resources webpage provides information regarding demographic data, business & workforce reports, and specifics regarding the City of Strongsville, including distance to major markets, largest employers, and city traffic counts. To learn more, select the link below: www.strongsville.org/departments/economic-development/community-profile

*To relocate your business to Strongsville, contact Brent Painter, Director of Economic Development, at (440) 580-3118 or at [email protected], or visit www.strongsville.org

Northeast Ohio's Incubators

The Northeast Ohio Edison Technology Incubator Program is designed to assist technology-oriented start-ups during their concept definition & development stages. A list of local incubators can be found below.
• Manufacturing Advocacy & Growth Network (MAGNET): www.magnetwork.org, 1768 East 25th Street, Cleveland, Ohio 44114, (216) 432-4197
• Great Lakes Incubator for Developing Enterprises (GLIDE): www.glideit.org, 151 Innovation Drive, Suite 210 (located at Lorain County Community College), Elyria, Ohio 44035, (440) 366-4310
• Braintree Partners: www.braintreepartners.org, 201 East Fifth Street, Suite 100, Mansfield, Ohio 44901, (419) 525-1614
• The Akron Global Business Accelerator: www.akronaccelerator.com, 526 South Main Street, Akron, Ohio 44311, (330) 375-2173
• Youngstown Business Incubator (YBI): www.ybi.org, 241 Federal Plaza West, Youngstown, Ohio 44503, (330) 746-5003
• Jumpstart, Inc.: www.jumpstartinc.org, 737 Bolivar Road, Suite 3000, Cleveland, Ohio 44115, (216) 363-3400

Workforce Development & Export Assistance

Export Assistance

U.S. Export Assistance Cleveland Office: The Cleveland U.S. Export Assistance Center (USEAC) is a division of the U.S. Department of Commerce and provides comprehensive solutions to international trade challenges through expert counseling. To review the services and programs provided by the USEAC, please visit the website listed below: www.export.gov/ohio/northernohio/

International Trade Assistance Center: The International Trade Assistance Center (ITAC) provides export assistance services to small and medium-sized businesses in order to promote growth through exports. To learn more about ITAC, select the link listed below: www.csuohio.edu/business/global/international-trade-assistance-center

Ohio Development Services Agency, Global Markets Division: With 7 international offices, the Ohio Development Services Agency's Global Markets Division's goal is to develop new relationships in foreign countries that will benefit the export strategies of Ohio's businesses. Select the link listed below to learn more about the Global Markets Division: http://development.ohio.gov/bs/bs_globalohio.htm

Workforce Development

OhioMeansJobs: OhioMeansJobs is a collaborative workforce system within Cuyahoga County that helps local employers meet their human capital needs and assists job seekers in finding success.
To learn more about OhioMeansJobs, please select the link below: www.ohiomeansjobs.com

Cuyahoga Community College Corporate College: Cuyahoga Community College offers Northeast Ohio companies affordable, cutting-edge training programs that can be custom-designed to accommodate an employer's workforce development needs through the Corporate College. To learn more about the Corporate College, please select the link listed below: www.corporatecollege.com

Polaris Career Center: The Polaris Career Center Adult Education Department offers comprehensive education and training services. For more details, please click the link listed below: www.polaris.edu/adult-education/
true
15
16
1,422
null
435
Only use information from the context given to you to answer the question. Do not use any outside sources. Do not be overly formal or robotic in your response.
What names does the company operate under?
SERVICE AND MAINTENANCE

Replacement Parts
• Water Filtration - Replacement water filtration disks can be purchased through your local retailer.
• Decanters – You can usually purchase a replacement decanter from the store where you purchased your coffeemaker. If you are unable to find a replacement, please call 1-800-667-8623 in Canada for information on where you can find a store that carries replacement decanters.

Repairs
If your coffeemaker requires service, do not return it to the store where you purchased it. All repairs and replacements must be made by Sunbeam or an authorized Sunbeam Service Center. Please call us at the following toll-free telephone number to find the location of the nearest authorized service center: Canada 1-800-667-8623. You may also visit our website at www.sunbeam.ca for a list of service centers. To assist us in serving you, please have the coffeemaker model number and date of purchase available when you call. The model number is stamped on the bottom metal plate of the coffeemaker. We welcome your questions, comments or suggestions. In all your communications, please include your complete name, address and telephone number and a description of the problem. Visit our website at www.sunbeam.ca and discover the secret to brewing the perfect cup of coffee. You will also find a rich blend of gourmet recipes, entertaining tips and the latest information on SUNBEAM™ products.

WARRANTY INFORMATION

1-YEAR LIMITED WARRANTY
Sunbeam Products, Inc. doing business as Jarden Consumer Solutions or, if in Canada, Sunbeam Corporation (Canada) Limited doing business as Jarden Consumer Solutions (collectively "JCS") warrants that for a period of one year from the date of purchase, this product will be free from defects in material and workmanship. JCS, at its option, will repair or replace this product or any component of the product found to be defective during the warranty period. Replacement will be made with a new or remanufactured product or component. If the product is no longer available, replacement may be made with a similar product of equal or greater value. This is your exclusive warranty. Do NOT attempt to repair or adjust any electrical or mechanical functions on this product. Doing so will void this warranty. This warranty is valid for the original retail purchaser from the date of initial retail purchase and is not transferable. Keep the original sales receipt. Proof of purchase is required to obtain warranty performance. JCS dealers, service centers, or retail stores selling JCS products do not have the right to alter, modify or in any way change the terms and conditions of this warranty. This warranty does not cover normal wear of parts or damage resulting from any of the following: negligent use or misuse of the product, use on improper voltage or current, use contrary to the operating instructions, disassembly, repair or alteration by anyone other than JCS or an authorized JCS service center. Further, the warranty does not cover: Acts of God, such as fire, flood, hurricanes and tornadoes.

What are the limits on JCS's Liability?
JCS shall not be liable for any incidental or consequential damages caused by the breach of any express, implied or statutory warranty or condition. Except to the extent prohibited by applicable law, any implied warranty or condition of merchantability or fitness for a particular purpose is limited in duration to the duration of the above warranty.
JCS disclaims all other warranties, conditions or representations, express, implied, statutory or otherwise. JCS shall not be liable for any damages of any kind resulting from the purchase, use or misuse of, or inability to use the product including incidental, special, consequential or similar damages or loss of profits, or for any breach of contract, fundamental or otherwise, or for any claim brought against purchaser by any other party. Some provinces, states or jurisdictions do not allow the exclusion or limitation of incidental or consequential damages or limitations on how long an implied warranty lasts, so the above limitations or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights that vary from province to province, state to state or jurisdiction to jurisdiction. How to Obtain Warranty Service In the U.S.A. If you have any question regarding this warranty or would like to obtain warranty service, please call 1-800-458-8407 and a convenient service center address will be provided to you. In Canada If you have any question regarding this warranty or would like to obtain warranty service, please call 1-800-667-8623 and a convenient service center address will be provided to you. In the U.S.A., this warranty is offered by Sunbeam Products, Inc. doing business as Jarden Consumer Solutions located in Boca Raton, Florida 33431. In Canada, this warranty is offered by Sunbeam Corporation (Canada) Limited doing business as Jarden Consumer Solutions, located at 20 B Hereford Street, Brampton, Ontario L6Y 0M1. If you have any other problem or claim in connection with this product, please write our Consumer Service Department. PLEASE DO NOT RETURN THIS PRODUCT TO ANY OF THESE ADDRESSES OR TO THE PLACE OF PURCHASE.
false
29
7
842
null
218
You are a bot designed to assist federal employees with checking compliance with NIST (National Institute of Standards and Technology) standards. You will be provided a context by the user in the form of an excerpt from the relevant NIST Standards Guide. Only use the provided context to inform responses. All claims made in the response should be verifiably true using only the provided context. At the end of each response, include a list of all links to other sources that appear in the context.
What is the difference between a "mobile device" and a "portable device"?
Mobile and Portable Devices

Portable and mobile devices that operate in the Cellular Radiotelephone Service (47 CFR 22 Subpart H), the Personal Communications Service (PCS) (47 CFR 24), the Satellite Communications Service (47 CFR 25), the Wireless Communications Service (47 CFR 27), the Maritime Service (ship earth stations only) (47 CFR 80), and Specialized Mobile Radio Service (47 CFR 24, 25, 27, 80 (ship earth stations devices only) and 90) at frequencies of 1.5 GHz or below with an effective radiated power (ERP) of 1.5 watts or more, or at frequencies above 1.5 GHz with an ERP of 3 watts or more, are subject to RF emissions requirements as specified in the rule part that they operate under. All of these portable and mobile devices are also subject to the routine environmental evaluation for RF exposure requirement of 47 CFR 2.1091 (bottom of page 706) (mobile devices) and/or 47 CFR 2.1093 (page 708) (portable devices) prior to equipment authorization or use.

Portable devices operating in the Wireless Medical Telemetry Service (WMTS) (47 CFR Part 95 Subpart H) and the Medical Device Radiocommunications Service (MEDRADIO) (47 CFR 95 Subpart I) are subject to RF emissions limits as specified in the rule part they operate under and also to routine environmental evaluation for RF exposure prior to equipment authorization or use. Unlicensed PCS (47 CFR Part 15 Subpart D), Unlicensed National Information Infrastructure (U-NII) (47 CFR Part 15 Subpart E), and millimeter wave devices (47 CFR Part 15 Subpart C) are subject to RF emission requirements specified in the rule section they operate in and are also subject to routine environmental evaluation for RF exposure prior to equipment authorization or use if their ERP is 3 watts or more or if they meet the definition of a portable device. All other mobile and portable devices are categorically excluded from routine environmental evaluation for RF exposure.

The FCC differentiates mobile and portable devices by their proximity to the user during use. Mobile devices, covered under 47 CFR 2.1091 (page 706), are defined as transmitting devices designed to be used in other than fixed locations and generally used in a manner such that the radiating structure is at least 20 cm from the body of the user or nearby persons. Examples of mobile devices include cellular and PCS mobile telephones with vehicle-mounted antennas and other radio devices that use vehicle-mounted antennas. These devices must be evaluated for exposure potential with respect to Maximum Permissible Exposure (MPE) limits for field strength or power density or with respect to specific absorption rate (SAR) limits, whichever is most appropriate for the specific use and operating configuration of the device.

Portable devices, covered under 47 CFR 2.1093 (page 708), are defined as transmitting devices designed to be used so the radiating structure is within 20 cm of the body of the user. These devices include handheld cellular phones and PCS mobile phones that incorporate the radiating antenna into the hand-piece and wireless transmitters carried close to the body. RF evaluation must be based on specific absorption rate (SAR) limits.
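The excerpt above reduces to a small set of numeric rules: an ERP threshold that depends on whether the device operates at or below 1.5 GHz, and a 20 cm separation that distinguishes mobile from portable devices and determines whether MPE or SAR limits apply. The sketch below is only an illustrative restatement of those rules as described in the excerpt, assuming ERP is given in watts, frequency in GHz, and separation in centimeters; the type and function names are invented for this example and are not part of any FCC or NIST tool.

```python
# Illustrative sketch of the classification and threshold rules in the excerpt above.
from dataclasses import dataclass

@dataclass
class Device:
    frequency_ghz: float         # operating frequency
    erp_watts: float             # effective radiated power (ERP)
    distance_to_body_cm: float   # typical separation of the radiating structure from the user

def classification(device: Device) -> str:
    """Mobile devices are generally used at least 20 cm from the body (47 CFR 2.1091);
    portable devices are used within 20 cm (47 CFR 2.1093)."""
    return "portable" if device.distance_to_body_cm < 20 else "mobile"

def meets_erp_threshold(device: Device) -> bool:
    """ERP thresholds from the excerpt: 1.5 W at 1.5 GHz or below, 3 W above 1.5 GHz."""
    if device.frequency_ghz <= 1.5:
        return device.erp_watts >= 1.5
    return device.erp_watts >= 3.0

def exposure_metric(device: Device) -> str:
    """Portable devices are evaluated against SAR limits; mobile devices against
    MPE (field strength / power density) or SAR, whichever fits the use case."""
    return "SAR" if classification(device) == "portable" else "MPE or SAR"

if __name__ == "__main__":
    handset = Device(frequency_ghz=1.9, erp_watts=0.5, distance_to_body_cm=1.0)
    vehicle_unit = Device(frequency_ghz=0.85, erp_watts=3.0, distance_to_body_cm=100.0)
    for d in (handset, vehicle_unit):
        print(classification(d), meets_erp_threshold(d), exposure_metric(d))
```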
false
85
12
520
null
820
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
My doctor said I have a herniated disc. Is there something that can permanently help with this diagnosis? What are the pros and cons of each surgical option?
What to know about herniated disc surgery

In this article: What is a herniated disc? • Who needs surgery? • Procedures • Recovery • Risks • Alternatives • Summary

A person who has a herniated disc may experience pain that affects their daily activities. While it is not always necessary, some people may require herniated disc surgery to alleviate pain and other symptoms. The type of surgery a person has depends on several factors. These include the location of the herniated disc, the severity of the pain, and the disability it causes. In this article, we discuss the different types of herniated disc surgeries and their risks. We will also explore how long it takes to recover from herniated disc surgery.

What is a herniated disc?

The pain from a herniated disc may affect a person's daily activities. The spine is made up of individual bones known as vertebrae. Intervertebral discs are discs of cartilage that sit between the vertebrae. The function of the intervertebral discs is to support the spine and act as shock absorbers between the vertebrae. There are normally 23 discs in the human spine. Each disc is made up of three components:
• Nucleus pulposus: This is the inner gel-like portion of the disc that gives the spine its flexibility and strength.
• Annulus fibrosus: This is a tough outer layer that surrounds the nucleus pulposus.
• Cartilaginous endplates: These are pieces of cartilage that sit between the disc and its adjoining vertebrae.

In a herniated disc, the annulus fibrosus is torn or ruptured. This damage allows part of the nucleus pulposus to push through into the spinal canal. Sometimes, the herniated material can press on a nerve, causing pain and affecting movement. Each year, herniated discs affect around 5–20 of every 1,000 adults between the ages of 20 and 49 years old. A herniated disc can occur anywhere in the spine. The two most common locations are the lumbar spine and the cervical spine. The lumbar spine refers to the lower back, while the cervical spine refers to the neck region.

Procedures

There is a variety of procedures that a surgeon can carry out to treat a herniated disc. The purpose of herniated disc surgery is to ease pressure on the nerve, thereby alleviating pain and other symptoms. A doctor may use one of the following three techniques to alleviate pressure on the nerve:
• Open discectomy: The surgeon performs open surgery to remove the herniated section of the disc.
• Endoscopic spine surgery: The surgeon uses a long thin tube, or endoscope, to remove the herniated section of the disc. The procedure is minimally invasive, requiring a tiny incision. Only a small scar will form, resulting in a quicker recovery.
• Surgery on the core of the spinal disc: The surgeon uses instruments to access the core of the spinal disc and then uses a vacuum to remove the core. This makes the spinal disc smaller, which reduces pressure on the nerve. The surgery is only possible if the outer layer of the disc is not damaged.

Other surgical interventions for a herniated disc include:

Laminotomy or laminectomy

The lamina is a part of the vertebrae that covers and protects the spinal canal. Sometimes, doctors need to remove part or all of the lamina to repair a herniated disc. A laminotomy involves the removal of part of the lamina, while a laminectomy is removal of the entire lamina. Both procedures involve making a small incision down the center of the back or neck over the area of the herniated disc. After removing part or all of the lamina, the surgeon performs a discectomy to remove the herniated disc.
Laminotomies and laminectomies can be lumbar or cervical: Lumbar procedures: These help to relieve leg pain or sciatic pain that a herniated disc causes in the lower back region. Cervical procedures: These help to relieve pain in the neck and upper limbs that a herniated disc causes in the neck region. Spinal fusion Following a laminotomy or laminectomy, a spinal fusion (SF) may be necessary to stabilize the spine. An SF involves joining two bones together with screws. People who have undergone an SF may experience pain and feel as if the treatment is restricting certain movements. The likelihood of needing an SF depends on the location of the herniated disc. Typically, lumbar laminotomies require an SF. Cervical laminotomies require an SF if the surgeon operates from the front of the neck. The same procedures rarely require an SF if the surgeon operates from the back of the neck. The point the surgeon works from depends on the exact location of the herniated disc. Some people who undergo laminotomy may be candidates for artificial disc surgery instead of an SF. Artificial disc surgery Artificial disc surgery (ADS) is an alternative to spinal fusion. In ADS, the surgeon replaces the damaged disc with an artificial one. A surgeon will usually associate this method with less pain and less restricted movement in comparison to SF procedures. Recovery process and timeline According to the North American Spine Society, people who undergo surgery for a herniated disc earlier rather than later may have a faster recovery time. They may also experience improved long term health. Typically, most people can go home 24 hours after a herniated disc operation. Some may even be able to go home the same day. Doctors recommend that people recovering from herniated disc surgery avoid the following activities for around 4 weeks: driving sitting for long periods lifting heavy weights bending over Some exercises may be beneficial for people who have had herniated disc surgery. However, they should consult their doctor or surgeon before attempting any strenuous activities. Sometimes, doctors may suggest rehabilitation therapy after surgery. People who follow a rehabilitation program after herniated disc surgery may achieve a shorter recovery time and improved mobility. Risks Discectomies hardly ever result in complications. However, in rare cases, people may experience the following: bleeding infections tears in the spine’s protective lining injury to the nerve In around 5% of people, the problematic disc may rupture again, causing symptoms to recur. Herniated disc surgery can be an effective treatment for many people with challenging pain. However, surgeons cannot guarantee that symptoms will disappear after surgery. Some people may continue to experience herniated disc pain after the recovery period. In some cases, the pain may worsen over time. Other treatment options Taking pain medication may ease symptoms of a herniated disc. People who develop a herniated disc should limit their activities for 2 to 3 days. Limiting movement will reduce inflammation at the site of the nerve. Although it may seem counterintuitive, doctors do not recommend bed rest, however. People who have pinched nerves in the neck and leg due to a herniated disc may try NSAIDs and physical therapy. If those treatments are ineffective, doctors may recommend other nonsurgical options, such as selective nerve root blocks. These treatments are local numbing agents that doctors inject into the spinal cord to alleviate herniated disc pain. 
Summary A herniated disc can cause disabling pain. In many cases, nonsurgical treatment options offer effective pain relief. If there is no improvement, a doctor may recommend herniated disc surgery. The type of surgical procedure a person undergoes depends on several factors. These include the location of the herniated disc, the severity of the pain, and level of disability it causes. Most people can return to their usual activities around 4 weeks after herniated disc surgery. People who follow a rehabilitation program after surgery may experience a shorter recovery time and better mobility.
[question] (user request repeated above) ===================== [text] (context document repeated above) https://www.medicalnewstoday.com/articles/326780#who-needs-surgery ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. EVIDENCE: (context document repeated above) USER: (user request repeated above) Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false; len_system: 28; len_user: 28; len_context: 1,253; target: null; row_id: 582
Only respond with the most direct answer possible. Do not discuss anything else. Use only information from the provided document.
What are the health benefits of high-quality sleep?
Your Guide to Healthy Sleep
NIH Publication No. 11-5271
Originally printed November 2005; revised August 2011

Contents
- Introduction
- What Is Sleep?
- What Makes You Sleep?
- What Does Sleep Do for You? (Your Learning, Memory, and Mood; Your Heart; Your Hormones)
- How Much Sleep Is Enough?
- What Disrupts Sleep?
- Is Snoring a Problem?
- Common Sleep Disorders (Insomnia; Sleep Apnea; Restless Legs Syndrome; Narcolepsy; Parasomnias (Abnormal Arousals))
- Do You Think You Have a Sleep Disorder?
- How To Find a Sleep Center and Sleep Specialist
- Research
- For More Sleep Information

Introduction

Think of your daily activities. Which activity is so important you should devote one-third of your time to doing it? Probably the first things that come to mind are working, spending time with your family, or doing leisure activities. But there's something else you should be doing about one-third of your time—sleeping.

Many people view sleep as merely a "down time" when their brains shut off and their bodies rest. People may cut back on sleep, thinking it won't be a problem, because other responsibilities seem much more important. But research shows that a number of vital tasks carried out during sleep help people stay healthy and function at their best. While you sleep, your brain is hard at work forming the pathways necessary for learning and creating memories and new insights. Without enough sleep, you can't focus and pay attention or respond quickly. A lack of sleep may even cause mood problems. Also, growing evidence shows that a chronic lack of sleep increases your risk of obesity, diabetes, cardiovascular disease, and infections.

Despite growing support for the idea that adequate sleep, like adequate nutrition and physical activity, is vital to our well-being, people are sleeping less. The nonstop "24/7" nature of the world today encourages longer or nighttime work hours and offers continual access to entertainment and other activities. To keep up, people cut back on sleep. A common myth is that people can learn to get by on little sleep (such as less than 6 hours a night) with no adverse effects. Research suggests, however, that adults need at least 7–8 hours of sleep each night to be well rested. Indeed, in 1910, most people slept 9 hours a night. But recent surveys show the average adult now sleeps fewer than 7 hours a night. More than one-third of adults report daytime sleepiness so severe that it interferes with work, driving, and social functioning at least a few days each month.

Evidence also shows that children's and adolescents' sleep is shorter than recommended. These trends have been linked to increased exposure to electronic media. Lack of sleep may have a direct effect on children's health, behavior, and development. Chronic sleep loss or sleep disorders may affect as many as 70 million Americans. This may result in an annual cost of $16 billion in health care expenses and $50 billion in lost productivity.

What happens when you don't get enough sleep? Can you make up for lost sleep during the week by sleeping more on the weekends? How does sleep change as you become older? Is snoring a problem? How can you tell if you have a sleep disorder? Read on to find the answers to these questions and to better understand what sleep is and why it is so necessary.
Learn about common sleep myths and practical tips for getting enough sleep, coping with jet lag and nighttime shift work, and avoiding dangerous drowsy driving. Many common sleep disorders go unrecognized and thus are not treated. This booklet also gives the latest information on sleep disorders such as insomnia (trouble falling or staying asleep), sleep apnea (pauses in breathing during sleep), restless legs syndrome, narcolepsy (extreme daytime sleepiness), and parasomnias (abnormal sleep behaviors).

"It's important to tell your doctor what you are experiencing, so you can help your doctor diagnose your condition." – Sze-Ping

What Is Sleep?

Sleep was long considered just a block of time when your brain and body shut down. Thanks to sleep research studies done over the past several decades, it is now known that sleep has distinct stages that cycle throughout the night in predictable patterns. How well rested you are and how well you function depend not just on your total sleep time but on how much sleep you get each night and the timing of your sleep stages. Your brain and body functions stay active throughout sleep, and each stage of sleep is linked to a specific type of brain waves (distinctive patterns of electrical activity in the brain). Sleep is divided into two basic types: rapid eye movement (REM) sleep and non-REM sleep (with three different stages). (For more information, see the "Types of Sleep" box below.)

Typically, sleep begins with non-REM sleep. In stage 1 non-REM sleep, you sleep lightly and can be awakened easily by noises or other disturbances. During this first stage of sleep, your eyes move slowly, your muscles relax, and your heart and breathing rates begin to slow. You then enter stage 2 non-REM sleep, which is defined by slower brain waves with occasional bursts of rapid waves. You spend about half the night in this stage. When you progress into stage 3 non-REM sleep, your brain waves become even slower, and the brain produces extremely slow waves almost exclusively (called Delta waves). Stage 3 is a very deep stage of sleep, during which it is very difficult to be awakened. Children who wet the bed or sleepwalk tend to do so during stage 3 of non-REM sleep. Deep sleep is considered the "restorative" stage of sleep that is necessary for feeling well rested and energetic during the day.

Types of Sleep

Non-REM Sleep
- Stage 1: Light sleep; easily awakened; muscles relax with occasional twitches; eye movements are slow.
- Stage 2: Eye movements stop; slower brain waves, with occasional bursts of rapid brain waves.
- Stage 3: Occurs soon after you fall asleep and mostly in the first half of the night. Deep sleep; difficult to awaken; large slow brain waves; heart and respiratory rates are slow and muscles are relaxed.

REM Sleep
- Usually first occurs about 90 minutes after you fall asleep, and longer, deeper periods occur during the second half of the night; cycles along with the non-REM stages throughout the night.
- Eyes move rapidly behind closed eyelids.
- Breathing, heart rate, and blood pressure are irregular.
- Dreaming occurs.
- Arm and leg muscles are temporarily paralyzed.

During REM sleep, your eyes move rapidly in different directions, even though your eyelids stay closed. Your breathing also becomes more rapid, irregular, and shallow, and your heart rate and blood pressure increase. Dreaming typically occurs during REM sleep.
During this type of sleep, your arm and leg muscles are temporarily paralyzed so that you cannot "act out" any dreams that you may be having. You typically first enter REM sleep about an hour to an hour and a half after falling asleep. After that, the sleep stages repeat themselves continuously while you sleep. As you sleep, REM sleep time becomes longer, while time spent in stage 3 non-REM sleep becomes shorter. By the time you wake up, nearly all your sleep time has been spent in stages 1 and 2 of non-REM sleep and in REM sleep. If REM sleep is severely disrupted during one night, REM sleep time is typically longer than normal in subsequent nights until you catch up. Overall, almost one-half of your total sleep time is spent in stage 2 non-REM sleep and about one-fifth each in deep sleep (stage 3 of non-REM sleep) and REM sleep. In contrast, infants spend half or more of their total sleep time in REM sleep. Gradually, as they grow, the percentage of total sleep time they spend in REM continues to decrease, until it reaches the one-fifth level typical of later childhood and adulthood.

Why people dream and why REM sleep is so important are not well understood. It is known that REM sleep stimulates the brain regions you use to learn and make memories. Animal studies suggest that dreams may reflect the brain's sorting and selectively storing new information acquired during wake time. While this information is processed, the brain might revisit scenes from the day and mix them randomly. Dreams are generally recalled when we wake briefly or are awakened by an alarm clock or some other noise in the environment. Studies show, however, that other stages of sleep besides REM also are needed to form the pathways in the brain that enable us to learn and remember.

What Makes You Sleep?

Although you may put off going to sleep in order to squeeze more activities into your day, eventually your need for sleep becomes overwhelming. This need appears to be due, in part, to two substances your body produces. One substance, called adenosine, builds up in your blood while you're awake. Then, while you sleep, your body breaks down the adenosine. Levels of this substance in your body may help trigger sleep when needed. A buildup of adenosine and many other complex factors might explain why, after several nights of less than optimal amounts of sleep, you build up a sleep debt. This may cause you to sleep longer than normal or at unplanned times during the day. Because of your body's internal processes, you can't adapt to getting less sleep than your body needs. Eventually, a lack of sleep catches up with you.

The other substance that helps make you sleep is a hormone called melatonin. This hormone makes you naturally feel sleepy at night. It is part of your internal "biological clock," which controls when you feel sleepy and your sleep patterns. Your biological clock is a small bundle of cells in your brain that works throughout the day and night. Internal and external environmental cues, such as light signals received through your eyes, control these cells. Your biological clock triggers your body to produce melatonin, which helps prepare your brain and body for sleep. As melatonin is released, you'll feel increasingly drowsy. Because of your biological clock, you naturally feel the most tired between midnight and 7 a.m. You also may feel mildly sleepy in the afternoon between 1 p.m. and 4 p.m. when another increase in melatonin occurs in your body.
Your biological clock makes you the most alert during daylight hours and the least alert during the early morning hours. Consequently, most people do their best work during the day. Our 24/7 society, however, demands that some people work at night. Nearly one-quarter of all workers work shifts that are not during the daytime, and more than two-thirds of these workers have problem sleepiness and/or difficulty sleeping. Because their work schedules are at odds with powerful sleep-regulating cues like sunlight, night shift workers often find themselves drowsy at work, and they have difficulty falling or staying asleep during the daylight hours when their work schedules require them to sleep.

The fatigue experienced by night shift workers can be dangerous. Major industrial accidents—such as the Three Mile Island and Chernobyl nuclear power plant accidents and the Exxon Valdez oil spill—have been caused, in part, by mistakes made by overly tired workers on the night shift or an extended shift. Night shift workers also are at greater risk of being in car crashes when they drive home from work during the early morning hours, because the biological clock is not sending out an alerting signal. One study found that one-fifth of night shift workers had a car crash or a near miss in the preceding year because of sleepiness on the drive home from work. Night shift workers are also more likely to have physical problems, such as heart disease, digestive troubles, and infertility, as well as emotional problems. All of these problems may be related, at least in part, to the workers' chronic sleepiness, possibly because their biological clocks are not in tune with their work schedules. See "Working the Night Shift" below for some helpful tips if you work a night shift.

Other factors also can influence your need for sleep, including your immune system's production of hormones called cytokines. Cytokines are made to help the immune system fight certain infections or chronic inflammation and may prompt you to sleep more than usual. The extra sleep may help you conserve the resources needed to fight the infection. Recent studies confirm that being well rested improves the body's responses to infection.

People are creatures of habit, and one of the hardest habits to break is the natural wake and sleep cycle. Together, a number of physiological factors help you sleep and wake up at the same times each day. Consequently, you may have a hard time adjusting when you travel across time zones. The light cues outside and the clocks in your new location may tell you it is 8 a.m. and you should be active, but your body is telling you it is more like 4 a.m. and you should sleep. The end result is jet lag—sleepiness during the day, difficulty falling or staying asleep at night, poor concentration, confusion, nausea, and generally feeling unwell and irritable. See "Dealing With Jet Lag" below.

Working the Night Shift

Try to limit night shift work, if that is possible. If you must work the night shift, the following tips may help you:

- Increase your total amount of sleep by adding naps and lengthening the amount of time you allot for sleep.
- Use bright lights in your workplace.
- Minimize the number of shift changes so that your body's biological clock has a longer time to adjust to a nighttime work schedule.
- Get rid of sound and light distractions in your bedroom during your daytime sleep.
- Use caffeine only during the first part of your shift to promote alertness at night.

If you are unable to fall asleep during the day, and all else fails, talk with your doctor to see whether it would be wise for you to use prescribed, short-acting sleeping pills to help you sleep during the day.

Dealing With Jet Lag

Be aware that adjusting to a new time zone may take several days. If you are going to be away for just a few days, it may be better to stick to your original sleep and wake times as much as possible, rather than adjusting your biological clock too many times in rapid succession. Eastward travel generally causes more severe jet lag than westward travel because traveling east requires you to shorten the day, and your biological clock is better able to adjust to a longer day than a shorter day. Fortunately for globetrotters, a few preventive measures and adjustments seem to help some people relieve jet lag, particularly when they are going to spend more than a few days at their destination:

- Adjust your biological clock. During the 2–3 days prior to a long trip, get adequate sleep. You can make minor changes to your sleep schedule. For example, if you are traveling west, delay your bedtime and wake time progressively by 20- to 30-minute intervals. If you are traveling east, advance your wake time by 10 to 15 minutes a day for a few days and try to advance your bedtime. Decreasing light exposure at bedtime and increasing light exposure at wake time can help you make these adjustments. When you arrive at your destination, spend a lot of time outdoors so your body gets the light cues it needs to adjust to the new time zone. Take a couple of short 10–15 minute catnaps if you feel tired, but do not take long naps during the day.
- Avoid alcohol and caffeine. Although it may be tempting to drink alcohol to relieve the stress of travel and make it easier to fall asleep, you're more likely to sleep lighter and wake up in the middle of the night when the effects of the alcohol wear off. Caffeine can help keep you awake longer, but caffeine also can make it harder for you to fall asleep if its effects haven't worn off by the time you are ready to go to bed. Therefore, it's best to use caffeine only during the morning and not during the afternoon.
- What about melatonin? Your body produces this hormone that may cause some drowsiness and cues the brain and body that it is time to fall asleep. Melatonin builds up in your body during the early evening and into the first 2 hours of your sleep period, and then its release stops in the middle of the night. Melatonin is available as an over-the-counter supplement. Because melatonin is considered safe when used over a period of days or weeks and seems to help people feel sleepy, it has been suggested as a treatment for jet lag. But melatonin's effectiveness is controversial, and its safety when used over a prolonged period is unclear. Some studies find that taking melatonin supplements before bedtime for several days after arrival in a new time zone can make it easier to fall asleep at the proper time. Other studies find that melatonin does not help relieve jet lag.

What Does Sleep Do for You?

A number of aspects of your health and quality of life are linked to sleep, and these aspects are impaired when you are sleep deprived.
Your Learning, Memory, and Mood

Students who have trouble grasping new information or learning new skills are often advised to "sleep on it," and that advice seems well founded. Recent studies reveal that people can learn a task better if they are well rested. They also can better remember what they learned if they get a good night's sleep after learning the task than if they are sleep deprived. Study volunteers had to sleep at least 6 hours to show improvement in learning. Additionally, the amount of improvement was directly related to how much time they slept—for example, volunteers who slept 8 hours outperformed those who slept only 6 or 7 hours. Other studies suggest that it's important to get enough rest the night before a mentally challenging task, rather than only sleeping for a short period or waiting to sleep until after the task is complete.

Many well-known artists and scientists claim to have had creative insights while they slept. Mary Shelley, for example, said the idea for her novel Frankenstein came to her in a dream. Although it has not been shown that dreaming is the driving force behind innovation, one study suggests that sleep is needed for creative problem-solving. In that study, volunteers were asked to perform a memory task and then were tested on it 8 hours later. Those who were allowed to sleep for 8 hours immediately after trying the task and before being tested were much more likely to find a creative way of simplifying the task and improving their performance, compared with those who were awake the entire 8 hours before being tested.

Exactly what happens during sleep to improve our learning, memory, and insight isn't known. Experts suspect, however, that while people sleep, they form or strengthen the pathways of brain cells needed to perform these tasks. This process may explain why sleep is needed for proper brain development in infants.

Not only is a good night's sleep required to form new learning and memory pathways in the brain, but also sleep is necessary for those pathways to work well. Several studies show that lack of sleep causes thinking processes to slow down. Lack of sleep also makes it harder to focus and pay attention. Lack of sleep can make you more easily confused. Studies also find that a lack of sleep leads to faulty decisionmaking and more risk taking. A lack of sleep slows down your reaction time, which is particularly important to driving and other tasks that require quick response. When people who lack sleep are tested on a driving simulator, they perform just as poorly as people who are drunk. (See "Crash in Bed, Not on the Road" below.) The bottom line is: Not getting a good night's sleep can be dangerous!

Even if you don't have a mentally or physically challenging day ahead of you, you should still get enough sleep to put yourself in a good mood. Most people report being irritable, if not downright unhappy, when they lack sleep. People who chronically suffer from a lack of sleep, either because they do not spend enough time in bed or because they have an untreated sleep disorder, are at greater risk of developing depression. One group of people who usually don't get enough sleep is mothers of newborns. Some experts think depression after childbirth (postpartum blues) is caused, in part, by a lack of sleep.

Your Heart

Sleep gives your heart and vascular system a much-needed rest. During non-REM sleep, your heart rate and blood pressure progressively slow as you enter deeper sleep.
During REM sleep, in response to dreams, your heart and breathing rates can rise and fall and your blood pressure can be variable. These changes throughout the night in blood pressure and heart and breathing rates seem to promote cardiovascular health. If you don't get enough sleep, the nightly dip in blood pressure that appears to be important for good cardiovascular health may not occur. Failure to experience the normal dip in blood pressure during sleep can be related to insufficient sleep time, an untreated sleep disorder (for example, sleep apnea), or other factors. Some sleep-related abnormalities may be markers of heart disease and increased risk of stroke.

A lack of sleep also puts your body under stress and may trigger the release of more adrenaline, cortisol, and other stress hormones during the day. These hormones keep your blood pressure from dipping during sleep, which increases your risk for heart disease. Lack of sleep also may trigger your body to produce more of certain proteins thought to play a role in heart disease. For example, some studies find that people who repeatedly don't get enough sleep have higher than normal blood levels of C-reactive protein, a sign of inflammation. High levels of this protein may indicate an increased risk for a condition called atherosclerosis, or hardening of the arteries.

Your Hormones

When you were young, your mother may have told you that you need to get enough sleep to grow strong and tall. She may have been right! Deep sleep (stage 3 non-REM sleep) triggers more release of growth hormone, which contributes to growth in children and boosts muscle mass and the repair of cells and tissues in children and adults. Sleep's effect on the release of sex hormones also contributes to puberty and fertility. Consequently, women who work at night and tend to lack sleep may be at increased risk of miscarriage.

Your mother also probably was right if she told you that getting a good night's sleep on a regular basis would help keep you from getting sick and help you get better if you do get sick. During sleep, your body creates more cytokines—cellular hormones that help the immune system fight various infections. Lack of sleep can reduce your body's ability to fight off common infections. Research also reveals that a lack of sleep can reduce the body's response to the flu vaccine. For example, sleep-deprived volunteers given the flu vaccine produced less than half as many flu antibodies as those who were well rested and given the same vaccine.

Although lack of exercise and other factors also contribute, the current epidemic of diabetes and obesity seems to be related, at least in part, to chronically short or disrupted sleep or not sleeping during the night. Evidence is growing that sleep is a powerful regulator of appetite, energy use, and weight control. During sleep, the body's production of the appetite suppressor leptin increases, and the appetite stimulant ghrelin decreases. Studies find that the less people sleep, the more likely they are to be overweight or obese and prefer eating foods that are higher in calories and carbohydrates. People who report an average total sleep time of 5 hours a night, for example, are much more likely to become obese, compared with people who sleep 7–8 hours a night.

A number of hormones released during sleep also control the body's use of energy. A distinct rise and fall of blood sugar levels during sleep appears to be linked to sleep stages.
Not sleeping at the right time, not getting enough sleep overall, or not enough of each stage of sleep disrupts this pattern. One study found that, when healthy young men slept only 4 hours a night for 6 nights in a row, their insulin and blood sugar levels matched those seen in people who were developing diabetes. Another study found that women who slept less than 7 hours a night were more likely to develop diabetes over time than those who slept between 7 and 8 hours a night.

Crash in Bed, Not on the Road

Most people are aware of the hazards of drunk driving. But driving while sleepy can be just as dangerous. Indeed, crashes due to sleepy drivers are as deadly as those due to drivers impaired by alcohol. And you don't have to be asleep at the wheel to put yourself and others in danger. Both alcohol and a lack of sleep limit your ability to react quickly to a suddenly braking car, a sharp curve in the road, or other situations that require rapid responses. Just a few seconds' delay in reaction time can be a life-or-death matter when driving. When people who lack sleep are tested on a driving simulator, they perform as badly as or worse than those who are drunk. The combination of alcohol and lack of sleep can be especially dangerous. There is increasing evidence that sleep deprivation and inexperience behind the wheel, both particularly common in adolescents, is a lethal combination.

Of course, driving is also hazardous if you fall asleep at the wheel, which happens surprisingly often. One-quarter of the drivers surveyed in New York State reported they had fallen asleep at the wheel at some time. Often, people briefly nod off at the wheel without being aware of it—they just can't recall what happened over the previous few seconds or longer. And people who lack sleep are more apt to take risks and make poor judgments, which also can boost their chances of getting in a car crash. Opening a window or turning up the radio won't help you stay awake while driving. The bottom line is that there is no substitute for sleep. Be aware of these warning signs that you are too sleepy to drive safely: trouble keeping your eyes open or focused, continual yawning, or being unable to recall driving the past few miles. Remember, if you are short on sleep, stay out of the driver's seat!

Here are some potentially life-saving tips for avoiding drowsy driving:

- Be well rested before hitting the road. If you have several nights in a row of fewer than 7–8 hours of sleep, your reaction time slows. Restoring that reaction time to normal can take more than one night of good sleep, because a sleep debt accumulates after each night you lose sleep. It may take several nights of being well rested to repay that sleep debt and make you ready for driving on a long road trip.
- Avoid driving between midnight and 7 a.m. Unless you are accustomed to being awake then, this period of time is when we are naturally the least alert and most tired.
- Don't drive alone. A companion who can keep you engaged in conversation might help you stay awake while driving.
- Schedule frequent breaks on long road trips. If you feel sleepy while driving, pull off the road and take a nap for 15–20 minutes.
- Don't drink alcohol. Just one beer when you are sleep deprived will affect you as much as two or three beers when you are well rested.
- Don't count on caffeine or other tricks.
Although drinking a cola or a cup of coffee might help keep you awake for a short time, it won't overcome extreme sleepiness or relieve a sleep debt.

"I wake up early to get ready for school. I am tired in the morning, and by the end of the school day, I am very tired again. An afterschool nap seems to refresh me and help me focus on homework. Without it, I am grumpy and stressed, can't focus, and sometimes get headaches." – Daphne

How Much Sleep Is Enough?

Animal studies suggest that sleep is as vital as food for survival. Rats, for example, normally live 2–3 years, but they live only 5 weeks if they are deprived of REM sleep and only 2–3 weeks if they are deprived of all sleep stages—a timeframe similar to death due to starvation. But how much sleep do humans need? To help answer that question, scientists look at how much people sleep when unrestricted, the average amount of sleep among various age groups, and the amount of sleep that studies reveal is necessary to function at your best.

When healthy adults are given unlimited opportunity to sleep, they sleep on average between 8 and 8.5 hours a night. But sleep needs vary from person to person. Some people appear to need only about 7 hours to avoid problem sleepiness, whereas others need 9 or more hours of sleep. Sleep needs also change throughout the life cycle. Newborns sleep between 16 and 18 hours a day, and children in preschool sleep between 11 and 12 hours a day. School-aged children and adolescents need at least 10 hours of sleep each night. The hormonal influences of puberty tend to shift adolescents' biological clocks. As a result, teenagers (who need between 9 and 10 hours of sleep a night) are more likely to go to bed later than younger children and adults, and they tend to want to sleep later in the morning. This delayed sleep–wake rhythm conflicts with the early-morning start times of many high schools and helps explain why most teenagers get an average of only 7–7.5 hours of sleep a night.

As people get older, the pattern of sleep also changes—especially the amount of time spent in deep sleep. This explains why children can sleep through loud noises and why they might not wake up when moved. Across the lifespan, the sleep period tends to advance, namely relative to teenagers; older adults tend to go to bed earlier and wake earlier. The quality—but not necessarily the quantity—of deep, non-REM sleep also changes, with a trend toward lighter sleep. The relative percentages of stages of sleep appear to stay mostly constant after infancy. From midlife through late life, people awaken more throughout the night. These sleep disruptions cause older people to lose more and more of stages 1 and 2 non-REM sleep as well as REM sleep. Some older people complain of difficulty falling asleep, early morning awakenings, frequent and long awakenings during the night, daytime sleepiness, and a lack of refreshing sleep. Many sleep problems, however, are not a natural part of sleep in the elderly. Their sleep complaints may be due, in part, to medical conditions, illnesses, or medications they are taking—all of which can disrupt sleep. In fact, one study found that the prevalence of sleep problems is very low in healthy older adults. Other causes of some of older adults' sleep complaints are sleep apnea, restless legs syndrome, and other sleep disorders that become more common with age.
Also, older people are more likely to have their sleep disrupted by the need to urinate during the night. Some evidence shows that the biological clock shifts in older people, so they are more apt to go to sleep earlier at night and wake up earlier in the morning. No evidence indicates that older people can get by with less sleep than younger people. (See "Top 10 Sleep Myths" below.) Poor sleep in older people may result in excessive daytime sleepiness, attention and memory problems, depressed mood, and overuse of sleeping pills.

Despite variations in sleep quantity and quality, both related to age and between individuals, studies suggest that the optimal amount of sleep needed to perform adequately, avoid a sleep debt, and not have problem sleepiness during the day is about 7–8 hours for adults and at least 10 hours for school-aged children and adolescents. Similar amounts seem to be necessary to avoid an increased risk of developing obesity, diabetes, or cardiovascular diseases.

Quality of sleep and the timing of sleep are as important as quantity. People whose sleep is frequently interrupted or cut short may not get enough of both non-REM sleep and REM sleep. Both types of sleep appear to be crucial for learning and memory—and perhaps for the restorative benefits of healthy sleep, including the growth and repair of cells.

Many people try to make up for lost sleep during the week by sleeping more on the weekends. But if you have lost too much sleep, sleeping in on a weekend does not completely erase your sleep debt. Certainly, sleeping more at the end of a week won't make up for any poor performance you had earlier in that week. Just one night of inadequate sleep can negatively affect your functioning and mood during at least the next day.

Daytime naps are another strategy some people use to make up for lost sleep during the night. Some evidence shows that short naps (up to an hour) can make up, at least partially, for the sleep missed on the previous night and improve alertness, mood, and work performance. But naps don't substitute for a good night's sleep. One study found that a daytime nap after a lack of sleep at night did not fully restore levels of blood sugar to the pattern seen with adequate nighttime sleep. If a nap lasts longer than 20 minutes, you may have a hard time waking up fully. In addition, late afternoon naps can make falling asleep at night more difficult.

Top 10 Sleep Myths

Myth 1: Sleep is a time when your body and brain shut down for rest and relaxation.
No evidence shows that any major organ (including the brain) or regulatory system in the body shuts down during sleep. Some physiological processes actually become more active while you sleep. For example, secretion of certain hormones is boosted, and activity of the pathways in the brain linked to learning and memory increases.

Myth 2: Getting just 1 hour less sleep per night than needed will not have any effect on your daytime functioning.
This lack of sleep may not make you noticeably sleepy during the day. But even slightly less sleep can affect your ability to think properly and respond quickly, and it can impair your cardiovascular health and energy balance as well as your body's ability to fight infections, particularly if lack of sleep continues. If you consistently do not get enough sleep, a sleep debt builds up that you can never repay. This sleep debt affects your health and quality of life and makes you feel tired during the day.
Myth 3: Your body adjusts quickly to different sleep schedules.
Your biological clock makes you most alert during the daytime and least alert at night. Thus, even if you work the night shift, you will naturally feel sleepy when nighttime comes. Most people can reset their biological clock, but only by appropriately timed cues—and even then, by 1–2 hours per day at best. Consequently, it can take more than a week to adjust to a substantial change in your sleep–wake cycle—for example, when traveling across several time zones or switching from working the day shift to the night shift.

Myth 4: People need less sleep as they get older.
Older people don't need less sleep, but they may get less sleep or find their sleep less refreshing. That's because as people age, the quality of their sleep changes. Older people are also more likely to have insomnia or other medical conditions that disrupt their sleep.

Myth 5: Extra sleep for one night can cure you of problems with excessive daytime fatigue.
Not only is the quantity of sleep important, but also the quality of sleep. Some people sleep 8 or 9 hours a night but don't feel well rested when they wake up because the quality of their sleep is poor. A number of sleep disorders and other medical conditions affect the quality of sleep. Sleeping more won't lessen the daytime sleepiness these disorders or conditions cause. However, many of these disorders or conditions can be treated effectively with changes in behavior or with medical therapies. Additionally, one night of increased sleep may not correct multiple nights of inadequate sleep.

Myth 6: You can make up for lost sleep during the week by sleeping more on the weekends.
Although this sleeping pattern will help you feel more rested, it will not completely make up for the lack of sleep or correct your sleep debt. This pattern also will not necessarily make up for impaired performance during the week or the physical problems that can result from not sleeping enough. Furthermore, sleeping later on the weekends can affect your biological clock, making it much harder to go to sleep at the right time on Sunday nights and get up early on Monday mornings.

Myth 7: Naps are a waste of time.
Although naps are no substitute for a good night's sleep, they can be restorative and help counter some of the effects of not getting enough sleep at night. Naps can actually help you learn how to do certain tasks quicker. But avoid taking naps later than 3 p.m., particularly if you have trouble falling asleep at night, as late naps can make it harder for you to fall asleep when you go to bed. Also, limit your naps to no longer than 20 minutes, because longer naps will make it harder to wake up and get back in the swing of things. If you take more than one or two planned or unplanned naps during the day, you may have a sleep disorder that should be treated.

Myth 8: Snoring is a normal part of sleep.
Snoring during sleep is common, particularly as a person gets older. Evidence is growing that snoring on a regular basis can make you sleepy during the day and increase your risk for diabetes and heart disease. In addition, some studies link frequent snoring to problem behavior and poorer school achievement in children. Loud, frequent snoring also can be a sign of sleep apnea, a serious sleep disorder that should be evaluated and treated. (See "Is Snoring a Problem?" later in this guide.)
Myth 9: Children who don’t get enough sleep at night will show signs of sleepiness during the day. Unlike adults, children who don’t get enough sleep at night typically become hyperactive, irritable, and inattentive during the day. They also have increased risk of injury and more behavior problems, and their growth rate may be impaired. Sleep debt appears to be quite common during childhood and may be misdiagnosed as attention-deficit hyperactivity disorder.

Myth 10: The main cause of insomnia is worry. Although worry or stress can cause a short bout of insomnia, a persistent inability to fall asleep or stay asleep at night can be caused by a number of other factors. Certain medications and sleep disorders can keep you up at night. Other common causes of insomnia are depression, anxiety disorders, and asthma, arthritis, or other medical conditions with symptoms that tend to be troublesome at night. Some people who have chronic insomnia also appear to be more “revved up” than normal, so it is harder for them to fall asleep.

“When medicines didn’t work for me, I started making big lifestyle changes. Now I try to eat a balanced diet and walk for at least an hour each day. Without doubt, my weight loss and more active lifestyle help me sleep better.” (Sze-Ping)

What Disrupts Sleep?

Many factors can prevent a good night’s sleep. These factors range from well-known stimulants, such as coffee, to certain pain relievers, decongestants, and other culprits. Many people depend on the caffeine in coffee, cola, or tea to wake them up in the morning or to keep them awake. Caffeine is thought to block the cell receptors that adenosine (a substance in the brain) uses to trigger its sleep-inducing signals. In this way, caffeine fools the body into thinking it isn’t tired. It can take as long as 6–8 hours for the effects of caffeine to wear off completely. Thus, drinking a cup of coffee in the late afternoon may prevent your falling asleep at night. Nicotine is another stimulant that can keep you awake. Nicotine also leads to lighter than normal sleep, and heavy smokers tend to wake up too early because of nicotine withdrawal.

Although alcohol is a sedative that makes it easier to fall asleep, it prevents deep sleep and REM sleep, allowing only the lighter stages of sleep. People who drink alcohol also tend to wake up in the middle of the night when the effects of an alcoholic “nightcap” wear off.

Certain commonly used prescription and over-the-counter medicines contain ingredients that can keep you awake. These ingredients include decongestants and steroids. Many medicines taken to relieve headaches contain caffeine. Heart and blood pressure medications known as beta blockers can make it difficult to fall asleep and cause more awakenings during the night. People who have chronic asthma or bronchitis also have more problems falling asleep and staying asleep than healthy people, either because of their breathing difficulties or because of the medicines they take. Other chronic painful or uncomfortable conditions—such as arthritis, congestive heart failure, and sickle cell anemia—can disrupt sleep, too.

A number of psychological disorders—including schizophrenia, bipolar disorder, and anxiety disorders—are well known for disrupting sleep. Depression often leads to insomnia, and insomnia can cause depression. Some of these psychological disorders are more likely to disrupt REM sleep.
Psychological stress also takes its toll on sleep, making it more difficult to fall asleep or stay asleep. People who feel stressed also tend to spend less time in deep sleep and REM sleep. Many people report having difficulties sleeping if, for example, they have recently lost a loved one, are going through a divorce, or are under stress at work. Menstrual cycle hormones can affect how well women sleep. Pro gesterone is known to induce sleep and circulates in greater concen trations in the second half of the menstrual cycle. For this reason, women may sleep better during this phase of their menstrual cycle. On the other hand, many women report trouble sleeping the night before their menstrual flow starts. This sleep disruption may be related to the abrupt drop in progesterone levels that occurs just before menstruation. Women in their late forties and early fifties, however, report more difficulties sleeping (insomnia) than younger women. These difficulties may be linked to menopause, when they have lower concentrations of progesterone. Hot flashes in women of this age also may cause sleep disruption and difficulties. Certain lifestyle factors also may deprive a person of needed sleep. Large meals or vigorous exercise just before bedtime can make it harder to fall asleep. While vigorous exercise in the evening may delay sleep onset for various reasons, exercise in the daytime is associated with improved nighttime sleep. If you aren’t getting enough sleep or aren’t falling asleep early enough, you may be overscheduling activi ties that can pre vent you from getting the 27 quiet relaxation time you need to prepare for sleep. Most people report that it’s easier to fall asleep if they have time to wind down into a less active state before sleeping. Relaxing in a hot bath or having a hot, caffeine-free beverage before bedtime may help. In addition, your body temperature drops after a hot bath in a way that mimics, in part, what happens as you fall asleep. Probably for both these reasons, many people report that they fall asleep more easily after a hot bath. Your sleeping environment also can affect your sleep. Clear your bedroom of any potential sleep distractions, such as noises, bright lights, a TV, a cell phone, or computer. Having a comfortable mattress and pillow can help promote a good night’s sleep. You also sleep better if the temperature in your bedroom is kept on the cool side. For more ideas on improving your sleep, check out the tips for getting a good night’s sleep below. Tips for Getting a Good Night’s Sleep l l l Stick to a sleep schedule. Go to bed and wake up at the same time each day. As creatures of habit, people have a hard time adjusting to changes in sleep patterns. Sleeping later on weekends won’t fully make up for a lack of sleep during the week and will make it harder to wake up early on Monday morning. Exercise is great, but not too late in the day. Try to exercise at least 30 minutes on most days but not later than 2–3 hours before your bedtime. Avoid caffeine and nicotine. Coffee, colas, certain teas, and chocolate contain the stimulant caffeine, and its effects can take as long as 8 hours to wear off fully. Therefore, a cup of coffee in the late afternoon can make it hard for you to fall asleep at night. Nicotine is also a stimulant, often causing smokers to sleep only very lightly. In addition, smokers often wake up too early in the morning because of nicotine withdrawal. What Disrupts Sleep? 
Tips for Getting a Good Night’s Sleep (continued)

• Avoid alcoholic drinks before bed. Having a “nightcap” or alcoholic beverage before sleep may help you relax, but heavy use robs you of deep sleep and REM sleep, keeping you in the lighter stages of sleep. Heavy alcohol ingestion also may contribute to impairment in breathing at night. You also tend to wake up in the middle of the night when the effects of the alcohol have worn off.

• Avoid large meals and beverages late at night. A light snack is okay, but a large meal can cause indigestion that interferes with sleep. Drinking too many fluids at night can cause frequent awakenings to urinate.

• If possible, avoid medicines that delay or disrupt your sleep. Some commonly prescribed heart, blood pressure, or asthma medications, as well as some over-the-counter and herbal remedies for coughs, colds, or allergies, can disrupt sleep patterns. If you have trouble sleeping, talk to your doctor or pharmacist to see whether any drugs you’re taking might be contributing to your insomnia and ask whether they can be taken at other times during the day or early in the evening.

• Don’t take naps after 3 p.m. Naps can help make up for lost sleep, but late afternoon naps can make it harder to fall asleep at night.

• Relax before bed. Don’t overschedule your day so that no time is left for unwinding. A relaxing activity, such as reading or listening to music, should be part of your bedtime ritual.

• Take a hot bath before bed. The drop in body temperature after getting out of the bath may help you feel sleepy, and the bath can help you relax and slow down so you’re more ready to sleep.

• Have a good sleeping environment. Get rid of anything in your bedroom that might distract you from sleep, such as noises, bright lights, an uncomfortable bed, or warm temperatures. You sleep better if the temperature in the room is kept on the cool side. A TV, cell phone, or computer in the bedroom can be a distraction and deprive you of needed sleep. Having a comfortable mattress and pillow can help promote a good night’s sleep. Individuals who have insomnia often watch the clock. Turn the clock’s face out of view so you don’t worry about the time while trying to fall asleep.

• Have the right sunlight exposure. Daylight is key to regulating daily sleep patterns. Try to get outside in natural sunlight for at least 30 minutes each day. If possible, wake up with the sun or use very bright lights in the morning. Sleep experts recommend that, if you have problems falling asleep, you should get an hour of exposure to morning sunlight and turn down the lights before bedtime.

• Don’t lie in bed awake. If you find yourself still awake after staying in bed for more than 20 minutes or if you are starting to feel anxious or worried, get up and do some relaxing activity until you feel sleepy. The anxiety of not being able to sleep can make it harder to fall asleep.

• See a doctor if you continue to have trouble sleeping. If you consistently find it difficult to fall or stay asleep and/or feel tired or not well rested during the day despite spending enough time in bed at night, you may have a sleep disorder. Your family doctor or a sleep specialist should be able to help you, and it is important to rule out other health or psychiatric problems that may be disturbing your sleep.

“My wife noticed that I snored loudly and sometimes stopped breathing in the middle of the night. She was the one who finally pushed me to see a doctor.” (Jim)
Is Snoring a Problem?

Long the material for jokes, snoring is generally accepted as common and annoying in adults but as nothing to worry about. However, snoring is no laughing matter. Frequent, loud snoring is often a sign of sleep apnea and may increase your risk of developing cardiovascular disease and diabetes. Snoring also may lead to daytime sleepiness and impaired performance.

Snoring is caused by a narrowing or partial blockage of the airways at the back of your mouth, throat, or nose. This obstruction results in increased air turbulence when breathing in, causing the soft tissues in your upper airways to vibrate. The end result is a noisy snore that can disrupt the sleep of your bed partner. This narrowing of the airways is typically caused by the soft palate, tongue, and throat relaxing while you sleep, but allergies or sinus problems also can contribute to a narrowing of the airways, as can being overweight and having extra soft tissue around your upper airways. The larger the tissues in your soft palate (the roof of your mouth in the back of your throat), the more likely you are to snore while sleeping. Alcohol or sedatives taken shortly before sleep also promote snoring. These drugs cause greater relaxation of the tissues in your throat and mouth.

Surveys reveal that about one-half of all adults snore, and 50 percent of these adults do so loudly and frequently. African Americans, Asians, and Hispanics are more likely to snore loudly and frequently compared with Caucasians, and snoring problems increase with age. Not everyone who snores has sleep apnea, but people who have sleep apnea typically do snore loudly and frequently. Sleep apnea is a serious sleep disorder, and its hallmark is loud, frequent snoring with pauses in breathing or shallow breaths while sleeping. (See “Sleep Apnea” on page 38.)

Even if you don’t experience these breathing pauses, snoring can still be a problem for you as well as for your bed partner. Snoring adds extra effort to your breathing, which can reduce the quality of your sleep and lead to many of the same health consequences as sleep apnea. One study found that older adults who did not have sleep apnea, but who snored 6–7 nights a week, were more than twice as likely to report being extremely sleepy during the day than those who never snored. The more people snored, the more daytime fatigue they reported. That sleepiness may help explain why snorers are more likely to be in car crashes than people who don’t snore. Loud snoring also can disrupt the sleep of bed partners and strain marital relations, especially if snoring causes the spouses to sleep in separate bedrooms.

In addition, snoring increases the risk of developing diabetes and heart disease. One study found that women who snored regularly were twice as likely as those who did not snore to develop diabetes, even if they were not overweight (another risk factor for diabetes). Other studies suggest that regular snoring may raise the lifetime risk of developing high blood pressure, heart failure, and stroke.

About one-third of all pregnant women begin snoring for the first time during their second trimester. If you are snoring while pregnant, let your doctor know. Snoring in pregnancy can be associated with high blood pressure and can have a negative effect on your baby’s growth and development.
Your doctor will keep a close eye on your blood pressure throughout your pregnancy and can let you know if any additional evaluations for the snoring might be useful. In most cases, the snoring and any related high blood pressure will go away shortly after delivery.

Snoring also can be a problem in children. As many as 10–15 percent of young children, who typically have enlarged adenoids and tonsils (both tissues in the throat), snore on a regular basis. Several studies show that children who snore (with or without sleep apnea) are more likely than those who do not snore to score lower on tests that measure intelligence, memory, and attention span. These children also have more problematic behavior, including hyperactivity. The end result is that children who snore don’t perform in school as well as those who do not snore. Strikingly, snoring was linked to a greater drop in IQ than that seen in children who had elevated levels of lead in their blood. Although the behavior of children improves after they stop snoring, studies suggest they may continue to get poorer grades in school, perhaps because of lasting effects on the brain linked to the snoring. You should have your child evaluated by your doctor if the child snores loudly and frequently—three to four times a week—especially if you note brief pauses in breathing while asleep and if there are signs of hyperactivity or daytime sleepiness, inadequate school achievement, or slower than expected development. Surgery to remove the adenoids and tonsils of children often can cure their snoring and any associated sleep apnea. Such surgery has been linked to a reduction in hyperactivity and improved ability to pay attention, even in children who showed no signs of sleep apnea before surgery.

Snoring in older children and adults may be relieved by less invasive measures, however. These measures include losing weight, refraining from use of tobacco, sleeping on the side rather than on the back, or elevating the head while sleeping. Treating chronic congestion and refraining from alcohol or sedatives before sleeping also may decrease snoring. In some adults, snoring can be relieved by dental appliances that reposition the soft tissues in the mouth. Although numerous over-the-counter nasal strips and sprays claim to relieve snoring, no scientific evidence supports those claims.

Common Sleep Disorders

A number of sleep disorders can disrupt your sleep quality and make you overly sleepy during the day, even if you spent enough time in bed to be well rested. (See “Common Signs of a Sleep Disorder” on page 34.) More than 70 sleep disorders affect at least 40 million Americans and account for an estimated $16 billion in medical costs each year, not counting costs due to lost work time, car accidents, and other factors. The four most common sleep disorders are insomnia, sleep apnea, restless legs syndrome, and narcolepsy. Additional sleep problems include chronic insufficient sleep, circadian rhythm abnormalities, and “parasomnias” such as sleep walking, sleep paralysis, and night terrors.

“My restless legs syndrome made me lose sleep and affected my quality of life. But I’m in a good place right now. I’m taking the right medicine for me, and I’ve adopted a healthy, active lifestyle. I am very passionate about taking control of my health.” (Lauren)
Common Signs of a Sleep Disorder

Look over this list of common signs of a sleep disorder, and talk to your doctor if you have any of them on three or more nights a week:

• It takes you more than 30 minutes to fall asleep at night.
• You awaken frequently in the night and then have trouble falling back to sleep again.
• You awaken too early in the morning.
• You often don’t feel well rested despite spending 7–8 hours or more asleep at night.
• You feel sleepy during the day and fall asleep within 5 minutes if you have an opportunity to nap, or you fall asleep unexpectedly or at inappropriate times during the day.
• Your bed partner claims you snore loudly, snort, gasp, or make choking sounds while you sleep, or your partner notices that your breathing stops for short periods.
• You have creeping, tingling, or crawling feelings in your legs that are relieved by moving or massaging them, especially in the evening and when you try to fall asleep.
• You have vivid, dreamlike experiences while falling asleep or dozing.
• You have episodes of sudden muscle weakness when you are angry or fearful, or when you laugh.
• You feel as though you cannot move when you first wake up.
• Your bed partner notes that your legs or arms jerk often during sleep.
• You regularly need to use stimulants to stay awake during the day.

Also keep in mind that, although children can show some of these signs of a sleep disorder, they often do not show signs of excessive daytime sleepiness. Instead, they may seem overactive and have difficulty focusing and concentrating. They also may not do their best in school.

Insomnia

Insomnia is defined as having trouble falling asleep or staying asleep, or as having unrefreshing sleep despite having ample opportunity to sleep. Life is filled with events that occasionally cause insomnia for a short time. Such temporary insomnia is common and is often brought on by situations such as stress at work, family pressures, or a traumatic event. A National Sleep Foundation poll of adults in the United States found that close to half of the respondents reported temporary insomnia in the nights immediately after the terrorist attacks on September 11, 2001.

Chronic insomnia is defined as having symptoms at least 3 nights per week for more than 1 month. Most cases of chronic insomnia are secondary, which means they are due to another disorder or medications. Primary chronic insomnia is a distinct sleep disorder; its cause is not yet well understood. About 30–40 percent of adults say they have some symptoms of insomnia within any given year, and about 10–15 percent of adults say they have chronic insomnia. Chronic insomnia becomes more common with age, and women are more likely than men to report having insomnia. Insomnia often causes problems during the day, such as extreme sleepiness, fatigue, a lack of energy, difficulty concentrating, depressed mood, and irritability. Thus, untreated insomnia can impair quality of life as much as, or more than, other chronic medical problems.

Chronic insomnia is often caused by one or more of the following:

• A disease or mood disorder. The most common causes of insomnia are depression and/or anxiety disorders. Neurological disorders, such as Alzheimer’s or Parkinson’s disease, also can have insomnia as a symptom. Chronic insomnia can result from thyroid dysfunction, arthritis, asthma, or other medical conditions in which symptoms become more troublesome at night, making it difficult to fall asleep or stay asleep.
• Various prescribed and over-the-counter medications that can disrupt sleep, such as decongestants, certain pain relievers, and steroids.
• Sleep-disrupting behavior such as drinking alcohol, exercising shortly before bedtime, ingesting caffeine late in the day, watching TV or reading while in bed, or irregular sleep schedules due to shift work or other causes.
• Another sleep disorder, such as sleep apnea or restless legs syndrome.

Some people, however, have primary chronic insomnia. This condition is linked to a tendency to be more “revved up” than normal (hyperarousal). People who have primary chronic insomnia may have heightened levels of certain hormones, higher body temperatures, faster heart rates, and a different pattern of brain waves while they sleep.

Doctors diagnose insomnia based mainly on sleep history, often by reviewing a sleep diary. An overnight sleep recording may be required if another sleep disorder is suspected. Doctors also will try to diagnose and treat any other underlying medical or psychological problems as well as identify behaviors that might be causing the insomnia. Often, people who have insomnia enter into a vicious cycle—because they’ve had trouble sleeping on previous nights, they become anxious at the slightest sign that they may not be falling asleep right away. That anxiety can make it more difficult for them to fall asleep. The more time they spend in bed not sleeping, and watching the clock, the more their anxiety—and sleeplessness—increases. To break that cycle of anxiety and negative conditioning, experts recommend going to bed only when you’re sleepy. If you can’t fall asleep (or fall back to sleep) within 20 minutes, get out of bed, go into another room, and do a relaxing activity (such as reading) until you feel sleepy again. Then return to bed. Studies have shown that this reconditioning therapy is an effective way to treat insomnia.

Relaxation therapy is another strategy that works for some people who have insomnia. Relaxation therapy may include meditation and other mental relaxation techniques. It also may include physical relaxation techniques, such as progressively tensing and then relaxing each of the muscle groups in your body before sleep. Another method is to focus on breathing deeply. Relaxation therapy can help your body and mind slow down so that you can fall asleep more easily at bedtime.

Sleep restriction therapy also works for some people who have insomnia. Calculate your average sleep time over the course of a week, and then limit your nightly sleep time to that average. Gradually add more sleep time each night until you achieve a more normal night’s sleep. You should avoid daytime naps longer than 15–20 minutes during sleep restriction therapy. Napping can make it harder to fall asleep at night, which may prolong insomnia. In addition, during sleep restriction therapy, avoid driving a car or operating dangerous machinery until you are getting enough sleep at night. All of these behavioral changes are part of a treatment called cognitive behavioral therapy. Cognitive behavioral therapy also can be used to replace negative thoughts about sleep, such as “I’ll never fall asleep without sleeping pills,” with more realistic positive thinking. Cognitive behavioral therapy is effective in most people who have chronic insomnia.
Some people who have chronic insomnia that is not corrected by behavioral therapy or treatment of an underlying condition may need a prescription medication. You should talk to a doctor before trying to treat insomnia with alcohol, over-the-counter or prescribed short-acting sedatives, or sedating antihistamines that induce drowsiness. The benefits of these treatments are limited, and they have risks. Some may help you fall asleep but leave you feeling unrefreshed in the morning. Others have longer lasting effects and leave you feeling still tired and groggy in the morning. Some also may lose their effectiveness over time. Doctors may prescribe sedating antidepressants for insomnia, but the effectiveness of these medicines in people who do not have depression is not known, and there are significant side effects. Common Sleep Disorders 38Your Guide to Healthy Sleep To treat their insomnia, some people pursue “natural” remedies, such as melatonin supplements or valerian teas or extracts. These remedies are available over the counter. Little evidence exists that melatonin can help relieve insomnia. Studies with valerian also have been inconclusive, and the actual dose and purity of various supplements, extracts, or teas that contain valerian may vary from product to product. In addition, because melatonin, valerian, and other natural remedies are not regulated by the Food and Drug Administration, their safety is not monitored. Sleep Apnea In people who have sleep apnea (also referred to as sleep-disordered breathing), breathing briefly stops or becomes very shallow during sleep. This change is caused by intermittent blocking of the upper airway, usually when the soft tissue in the rear of the throat collapses and partially or completely closes the airway. Each pause in breathing typically lasts 10–120 seconds and may occur 20–30 times or more each sleeping hour. If you have sleep apnea, not enough air can flow into your lungs through your mouth and nose during sleep, even though breathing efforts continue. When this happens, the amount of oxygen in your blood decreases. Your brain responds by awakening you enough to tighten the upper airway muscles and open your windpipe. Normal breaths then start again, often with a loud snort or choking sound. Although people who have sleep apnea typically snore loudly and frequently, not everyone who snores has sleep apnea. (See “Is Snoring a Problem?” on page 30.) Because people who have sleep apnea frequently go from deeper sleep to lighter sleep during the night, they rarely spend enough time in deep, restorative stages of sleep. They are therefore often exces sively sleepy during the day. Such sleepiness is thought to lead to mood and behavior problems, including depression, and it more than triples the risk of being in a traffic or work-related accident. The many brief drops in blood-oxygen levels that occur during the night can result in morning headaches and trouble concentrating, thinking clearly, learning, and remembering. Additionally, the intermittent oxygen drops and reduced sleep quality together trigger the release of stress hormones. These hormones raise your blood pressure and heart rate and boost the risk of heart attack, stroke, irregular heartbeats, and congestive heart failure. In addition, 39 Common Sleep DisordersI realize now that my sleep apnea affected my quality of life. I felt tired all the time—so tired that I couldn’t exercise or spend time with my kids. 
I had other sleep apnea symptoms that affected my work—headaches, confusion, making errors, etc. “Looking back, I know that I should have taken it more seriously and told my doctor about my symptoms many years before I did. “One thing that helps me is physical activity. Now that I am feeling better, I come home from work with enough energy to have an exercise routine. J I M “ ” 40Your Guide to Healthy Sleep untreated sleep apnea can lead to changes in energy metabolism (the way your body changes food and oxygen into energy) that increase the risk for developing obesity and diabetes. Anyone can have sleep apnea. It is estimated that at least 12–18 million American adults have sleep apnea, making it as common as asthma. More than one-half of the people who have sleep apnea are overweight. Sleep apnea is more common in men. More than 1 in 25 middle-aged men and 1 in 50 middle-aged women have sleep apnea along with extreme daytime sleepiness. About 3 percent of children and 10 percent or more of people over age 65 have sleep apnea. This condition occurs more frequently in African Americans, Asians, Native Americans, and Hispanics than in Caucasians. More than one-half of all people who have sleep apnea are not diagnosed. People who have sleep apnea generally are not aware that their breathing stops in the night. They just notice that they don’t feel well rested when they wake up and are sleepy throughout the day. Their bed partners are likely to notice, however, that they snore loudly and frequently and that they often stop breathing briefly while sleeping. Doctors suspect sleep apnea if these symptoms are present, but the diagnosis must be confirmed with overnight sleep monitoring. (See “How Are Sleep Disorders Diagnosed?” on page 44.) This monitoring will reveal pauses in breathing, frequent sleep arousals (changes from sleep to wakefulness), and intermittent drops in levels of oxygen in the blood. 41 n n n n Like adults who have sleep apnea, children who have this disorder usually snore loudly, snort or gasp, and have brief pauses in breath ing while sleeping. Small children often have enlarged tonsils and adenoids that increase their risk for sleep apnea. But doctors may not suspect sleep apnea in children because, instead of showing the typical signs of sleepiness during the day, these children often become agitated and may be considered hyperactive. The effects of sleep apnea in children may include poor school performance and difficult, aggressive behavior. A number of factors can make a person susceptible to sleep apnea. These factors include: n n n n n n Throat muscles and tongue that relax more than normal while asleep Enlarged tonsils and adenoids Being overweight—the excess fat tissue around your neck makes it harder to keep the throat area open Head and neck shape that creates a somewhat smaller airway size in the mouth and throat area Congestion, due to allergies, that also can narrow the airway Family history of sleep apnea If your doctor suspects that you have sleep apnea, you may be referred to a sleep specialist. Some of the ways to help diagnose sleep apnea include: A medical history that includes asking you and your family questions about how you sleep and how you function during the day. Checking your mouth, nose, and throat for extra or large tissues—for example, checking the tonsils, uvula (the tissue that hangs from the middle of the back of the mouth), and soft palate (the roof of your mouth in the back of your throat). 
An overnight recording of what happens with your breathing during sleep (polysomnogram, or PSG). A multiple sleep latency test (MSLT), usually done in a sleep center, to see how quickly you fall asleep at times when you would normally be awake. (Falling asleep in only a few minutes usually means that you are very sleepy during the day. Being very sleepy during the day can be a sign of sleep apnea.) Common Sleep Disorders 42Your Guide to Healthy Sleep n n n Once all the tests are completed, the sleep specialist will review the results and work with you and your family to develop a treatment plan. Changes in daily activities or habits may help reduce your symptoms: Sleep on your side instead of on your back. Sleeping on your side will help reduce the amount of upper airway collapse during sleep. Avoid alcohol, smoking, sleeping pills, herbal supplements, and any other medications that make you sleepy. They make it harder for your airways to stay open while you sleep, and sedatives can make the breathing pauses longer and more severe. Tobacco smoke irritates the airways and can help trigger the intermittent collapse of the upper airway. Lose weight if you are overweight. Even a little weight loss can sometimes improve symptoms. These changes may be all that are needed to treat mild sleep apnea. However, if you have moderate or severe sleep apnea, you will need additional, more direct treatment approaches. Continuous positive airway pressure (CPAP) is the most effective treatment for sleep apnea in adults. A CPAP machine uses mild air pressure to keep your airways open while you sleep. The machine delivers air to your airways through a specially designed nasal mask. The mask does not breathe for you; the flow of air creates increased pressure to keep the airways in your nose and mouth more open while you sleep. The air pressure is adjusted so that it is just enough to stop your airways from briefly becoming too small during sleep. The pressure is constant and continuous. Sleep apnea will return if CPAP is stopped or if it is used incorrectly. People who have severe sleep apnea symptoms generally feel much better once they begin treatment with CPAP. CPAP treatment can cause side effects in some people. Possible side effects include dry or stuffy nose, irritation of the skin on the face, bloating of the stom ach, sore eyes, or headaches. If you have trouble with CPAP side effects, work with your sleep specialist and support staff. Together, you can do things to reduce or eliminate these problems. Currently, no medications cure sleep apnea. However, some prescription medications may help relieve the excessive sleepiness that sometimes persists even with CPAP treatment of sleep apnea. 43My doctor prescribed CPAP (continuous positive airway pressure) for me, but it was not easy to use at first. Sleeping with a CPAP machine was uncomfortable for me, so I didn’t use it like I should have—rarely, if at all. One day at work, I started feeling really bad, so I went to the hospital. The doctors told me that since I had not been using CPAP regularly, not enough oxygen was going to my brain, which caused symptoms like those for a stroke. So, I went back to my doctor and got a different CPAP machine that was more comfortable for me. “It’s important to talk with your health care provider to make sure that your treatment is comfortable and works for you. J I M “ ” Another treatment approach that may help some people is the use of a mouthpiece (oral or dental appliance). 
If you have mild sleep apnea or do not have sleep apnea but snore very loudly, your doctor or dentist also may recommend this. A custom-fitted plastic mouth piece will be made by a dentist or an orthodontist (a specialist in correcting teeth or jaw problems). The mouthpiece will adjust your lower jaw and tongue to help keep the airway in your throat more open while you are sleeping. Air can then flow more easily into your lungs because there is less resistance to breathing. Following up with the dentist or orthodontist is important to correct any side effects and to be sure that your mouthpiece continues to fit properly. It is also important to have a followup sleep study to see whether your sleep apnea has improved. Some people who have sleep apnea may benefit from surgery; this depends on the findings of the evaluation by the sleep specialist. Removing tonsils and adenoids that are blocking the airway is done frequently, especially in children. Uvulopalatopharyngoplasty (UPPP) is a surgery for adults that removes the tonsils, uvula, and part of the soft palate. Tracheostomy is a surgery used rarely and only in severe sleep apnea when no other treatments have been successful. A small hole is made in the windpipe, and a tube is inserted. Air will flow through the tube and into the lungs, bypass ing the obstruction in the upper airway. Common Sleep Disorders 44Your Guide to Healthy Sleep l l l How Are Sleep disorders diagnosed? Depending on your symptoms, your doctor will gather informa tion and consider several possible tests when trying to diagnose a sleep disorder: Sleep history and sleep log. Your doctor will ask you how many hours you sleep each night, how often you awaken during the night and for how long, how long it takes you to fall asleep, how well rested you feel upon awakening, and how sleepy you feel during the day. Your doctor may ask you to keep a sleep diary for a few weeks. (See “Sample Sleep Diary” on page 54.) Your doctor also may ask you whether you have any symptoms of sleep apnea or restless legs syndrome, such as loud snoring, snorting or gasping, morning headaches, tingling or unpleasant sensations in the limbs that are relieved by moving them, and jerking of the limbs during sleep. Your sleeping partner may be asked whether you have some of these symptoms, as you may not be aware of them yourself. Sleep recording in a sleep laboratory (polysomnogram). A sleep recording or polysomnogram (PSG) is usually done while you stay overnight at a sleep center or sleep laboratory. Electrodes and other monitors are placed on your scalp, face, chest, limbs, and finger. While you sleep, these devices measure your brain activity, eye movements, muscle activity, heart rate and rhythm, blood pressure, and how much air moves in and out of your lungs. This test also checks the amount of oxygen in your blood. A PSG test is painless. In certain circumstances, the PSG can be done at home. A home monitor can be used to record heart rate, how air moves in and out of your lungs, the amount of oxygen in your blood, and your breathing effort. Multiple sleep latency test (MSLT). This daytime sleep study measures how sleepy you are and is particularly useful for diagnosing narcolepsy. The MSLT is conducted in a sleep 45 laboratory and typically done after an overnight sleep recording (PSG). In this test, monitoring devices for sleep stage are placed on your scalp and face. You are asked to nap four or five times for 20 minutes every 2 hours during the day. 
Technicians note how quickly you fall asleep and how long it takes you to reach various stages of sleep, especially REM sleep, during your naps. Normal individuals either do not fall asleep during these short designated naptimes or take a long time to fall asleep. People who fall asleep in less than 5 minutes are likely to require treatment for a sleep disorder, as are those who quickly reach REM sleep during their naps. It is important to have a sleep specialist interpret the results of your PSG or MSLT. See “How To Find a Sleep Center and Sleep Specialist” on page 56.

“I started to get weird feelings in my legs at night while I slept. To feel better, I would get up and move around and stretch. Then the weird feelings began to happen more often and made me lose sleep. I started to think that something was wrong. I decided to go to the doctor and was diagnosed with restless legs syndrome (RLS). Because RLS symptoms can change, I’m always trying to find the right mix of diet, medication, and exercise. Exercise and massage help me manage my RLS. Yoga helps a lot too, because of all the stretching involved.” (Lauren)

Restless Legs Syndrome

Restless legs syndrome (RLS) causes an unpleasant prickling or tingling in the legs, especially in the calves, that is relieved by moving or massaging them. People who have RLS feel a need to stretch or move their legs to get rid of the uncomfortable or painful feelings. As a result, it may be difficult to fall asleep and stay asleep. One or both legs may be affected. Some people also feel the sensations in their arms. These sensations also can occur when lying down or sitting for long periods of time, such as while at a desk, riding in a car, or watching a movie. Many people who have RLS also have brief limb movements during sleep, often with abrupt onset, occurring every 5–90 seconds. This condition, known as periodic limb movements in sleep (PLMS), can repeatedly awaken people who have RLS, reducing their total sleep time and interrupting their sleep. Some people have PLMS but have no abnormal sensations in their legs while awake.

RLS affects 5–15 percent of Americans, and its prevalence increases with age. RLS occurs more often in women than men. One study found that RLS accounted for one-third of the insomnia seen in patients older than age 60. Children also can have RLS. In children, the condition may be associated with symptoms of attention-deficit hyperactivity disorder. However, it’s not fully known how the disorders are related. Sometimes “growing pains” can be mistaken for RLS. RLS is often inherited. Pregnancy, kidney failure, and anemia related to iron or vitamin deficiency can trigger or worsen RLS symptoms. Researchers suspect that these conditions cause an iron deficiency that results in a lack of dopamine, which is used by the brain to control physical sensation and limb movements.

Doctors usually can diagnose RLS by patients’ symptoms and a telltale worsening of symptoms at night or while at rest. Some doctors may order a blood test to check ferritin levels (ferritin is a form of stored iron). Doctors also may ask people who have RLS to spend a night in a sleep laboratory, where they are monitored to rule out other sleep disorders and to document the excessive limb movements. RLS is treatable but not always curable. Dramatic improvements are seen quickly when patients are given dopamine-like drugs or iron supplements.
Alternatively, people who have milder cases may be treated successfully with sedatives or behavioral strategies. These Common Sleep Disorders 48Your Guide to Healthy Sleep n strategies include stretching, taking a hot bath, or massaging the legs before bedtime. Avoiding caffeinated beverages also can help reduce symptoms, and certain medications (e.g., some antidepressants, particularly selective serotonin reuptake inhibitors) may cause RLS. If iron or vitamin deficiency underlies RLS, symptoms may improve with prescribed iron, vitamin B12, or folate supplements. Some people may require anticonvulsant medications to control the creeping and crawling sensations in their limbs. Others who have severe symptoms that are associated with another medical disorder or that do not respond to normal treatments may need to be treated with pain relievers. Narcolepsy Narcolepsy’s main symptom is extreme and overwhelming daytime sleepiness, even after adequate nighttime sleep. In addition, nighttime sleep may be fragmented by frequent awakenings. People who have narcolepsy often fall asleep at inappropriate times and places. Although TV sitcoms occasionally feature these individuals to generate a few laughs, narcolepsy is no laughing matter. People who have narcolepsy experience daytime “sleep attacks” that last from seconds to more than one-half hour, can occur without warning, and may cause injury. These embarrassing sleep spells also can make it difficult to work and to maintain normal personal or social relationships. With narcolepsy, the usually sharp distinctions between being asleep and awake are blurred. Also, people who have narcolepsy tend to fall directly into dream-filled REM sleep, rather than enter REM sleep gradually after passing through the non-REM sleep stages first. In addition to overwhelming daytime sleepiness, narcolepsy has three other commonly associated symptoms, but these may not occur in all people: Sudden muscle weakness (cataplexy). This weakness is similar to the paralysis that normally occurs during REM sleep, but it lasts a few seconds to minutes while an individual is awake. Cataplexy tends to be triggered by sudden emotional reac tions, such as anger, surprise, fear, or laughter. The weakness may show up as limpness at the neck, buckling of the knees, or sagging facial muscles affecting speech, or it may cause a complete body collapse. 49 Common Sleep DisordersAt first, I was misdiagnosed with chronic fatigue syndrome, because I was in my forties and narcolepsy symptoms usually start during the teen years. Because I didn’t have any of the symptoms of chronic fatigue syndrome other than sleepiness, I went to a neurologist for help. He noticed the cataplexy (muscle weakness) right away, and then I was officially diagnosed with narcolepsy and then later on with borderline sleep apnea. “Even though there is no cure for narcolepsy, you can feel like you have control if you manage it well. “When you have narcolepsy, you live your life differently. But with a good plan and supportive friends and family, it all turns out OK. S Z E - P I N G “ ” 50Your Guide to Healthy Sleep n n Sleep paralysis. People who have narcolepsy may experience a temporary inability to talk or move when falling asleep or waking up, as if they were glued to their beds. Vivid dreams. These dreams can occur when people who have narcolepsy first fall asleep or wake up. The dreams are so lifelike that they can be confused with reality. 
Experts estimate that as many as 350,000 Americans have narco lepsy, but fewer than 50,000 are diagnosed. The disorder may be as widespread as Parkinson’s disease or multiple sclerosis, and more prevalent than cystic fibrosis, but it is less well known. Narcolepsy is often mistaken for depression, epilepsy, or the side effects of medicines. Narcolepsy can be difficult to diagnose in people who have only the symptom of excessive daytime sleepiness. It is usually diagnosed during an overnight sleep recording (PSG) that is followed by an MSLT. (See “How Are Sleep Disorders Diagnosed?” on page 44.) Both tests reveal symptoms of narcolepsy—the tendency to fall asleep rapidly and enter REM sleep early, even during brief naps. Narcolepsy can develop at any age, but the symptoms tend to appear first during adolescence or early adulthood. About 1 of every 10 people who have narcolepsy has a close family member who has the disorder, suggesting that one can inherit a tendency to develop narcolepsy. Studies suggest that a substance in the brain called hypocretin plays a key role in narcolepsy. Most people who have narcolepsy lack hypocretin, which promotes wakefulness. Scientists believe that an autoimmune reaction—perhaps triggered by disease, viral illness, or brain injury— specifically destroys the hypocretin-generating cells in the brains of people who have narcolepsy. 51 Eventually, researchers may develop a treatment for narcolepsy that restores hypocretin to normal levels. In the meantime, most people who have narcolepsy find some to all of their symptoms relieved by various drug treatments. For example, central nervous system stimulants can reduce daytime sleepiness. Antidepressants and other drugs that suppress REM sleep can prevent muscle weakness, sleep paralysis, and vivid dreaming. Doctors also usually recommend that people who have narcolepsy take short naps (10–15 minutes) two or three times a day, if possible, to help control excessive daytime sleepiness. Parasomnias (Abnormal Arousals) In some people, the walking, talking, and other body functions normally suppressed during sleep occur during certain sleep stages. Alternatively, the paralysis or vivid images usually experienced during dreaming may persist after awakening. These occurrences are collectively known as parasomnias and include confusional arousals (a mixed state of being both asleep and awake), sleep talking, sleep walking, night terrors, sleep paralysis, and REM sleep behavior disorder (acting out dreams). Most of these disorders— such as confusional arousals, sleep walking, and night terrors—are more common in children, who tend to outgrow them once they become adults. People who are sleep-deprived also may experience some of these disorders, including sleep walking and sleep paralysis. Sleep paralysis also commonly occurs in people who have narco lepsy. Certain medications or neurological disorders appear to lead to other parasomnias, such as REM sleep behavior disorder, and these parasomnias tend to occur more in elderly people. If you or a family member has persistent episodes of sleep paralysis, sleep walking, or acting out of dreams, talk with your doctor. Taking measures to assure the safety of children and other family members who have partial arousals from sleep is very important. Common Sleep Disorders 52Your Guide to Healthy SleepIt’s a scary experience, lying in bed, wanting to get up, but unable to—scary enough to almost make you not want to go to sleep anymore. 
I can remember, as a child, feeling as though there was a weight on me when I was trying to wake up, and I couldn’t move. When I would try to wake up, I would kick my legs and flail my arms, sometimes bumping my wife. I really didn’t have control over my limbs. When the symptoms got really bad, I went to a sleep specialist, who told me I had sleep paralysis. My doctor prescribed a medicine that has worked great for me. Now, I rarely have sleep paralysis—maybe 3 times per year.” (Lawrence)

Do You Think You Have a Sleep Disorder?

At various points in our lives, all of us suffer from a lack of sleep that can be corrected by making sure we have the opportunity to get enough sleep. But, if you are spending enough time in bed and still wake up tired or feel very sleepy during the day, you may have a sleep disorder. See “Common Signs of a Sleep Disorder” on page 34. One of the best ways you can tell whether you are getting enough good-quality sleep, and whether you have signs of a sleep disorder, is by keeping a sleep diary. (See “Sample Sleep Diary” on page 54.) Use this diary to record the quality and quantity of your sleep; your use of medications, alcohol, and caffeinated beverages; your exercise patterns; and how sleepy you feel during the day. After a week or so, look over this information to see how many hours of sleep or nighttime awakenings one night are linked to your being tired the next day. This information will give you a sense of how much uninterrupted sleep you need to avoid daytime sleepiness. You also can use the diary to see some of the patterns or practices that may keep you from getting a good night’s sleep.

You may have a sleep disorder and should see your doctor if your sleep diary reveals any of the following:

• You consistently take more than 30 minutes each night to fall asleep.
• You consistently awaken more than a few times or for long periods of time each night.
• You take frequent naps.
• You often feel sleepy during the day—or you fall asleep at inappropriate times during the day.

Sample Sleep Diary

Name:

Complete in the morning (example entries for a Monday are shown after each item; use them as a model for your own diary notes):
• Today’s date (include month/day/year): Monday
• Time I went to bed last night: 11 p.m.
• Time I woke up this morning: 7 a.m.
• No. of hours slept last night: 8
• Number of awakenings and total time awake last night: 5 times, 2 hours
• How long I took to fall asleep last night: 30 mins.
• Medications taken last night: None
• How awake did I feel when I got up this morning? (1—Wide awake, 2—Awake but a little tired, 3—Sleepy): 2

Complete in the evening:
• Number of caffeinated drinks (coffee, tea, cola) and time when I had them today: 1 drink at 8 p.m.
• Number of alcoholic drinks (beer, wine, liquor) and time when I had them today: 2 drinks at 9 p.m.
• Naptimes and lengths today: 3:30 p.m., 45 mins.
• Exercise times and lengths today: None
• How sleepy did I feel during the day today? (1—So sleepy had to struggle to stay awake during much of the day, 2—Somewhat tired, 3—Fairly alert, 4—Wide awake): 1

How To Find a Sleep Center and Sleep Specialist

If your doctor refers you to a sleep center or sleep specialist, make sure that center or specialist is qualified to diagnose and treat your sleep problem.
To find sleep centers accredited by the American Academy of Sleep Medicine, go to www.aasmnet.org and click on “Find a Sleep Center” (under the Patients & Public menu), or call 708–492–0930. To find sleep specialists certified by the American Board of Sleep Medicine, go to www.absm.org and click on “Verification of Diplomates of the ABSM.”

Research

Researchers have learned a lot about sleep and sleep disorders in recent years. That knowledge has led to a better understanding of the importance of sleep to our lives and our health. Research supported by the National Heart, Lung, and Blood Institute (NHLBI) has helped identify some of the causes of sleep disorders and their effects on the heart, brain, lungs, and other body systems. The NHLBI also supports ongoing research on the most effective ways to diagnose and treat sleep disorders. Many questions remain about sleep and sleep disorders. The NHLBI continues to support a range of research that focuses on:

• Better understanding of how a lack of sleep increases the risk for obesity, diabetes, heart disease, and stroke
• New ways to diagnose sleep disorders
• Genetic, environmental, and social factors that lead to sleep disorders
• The adverse effects from a lack of sleep on body and brain

Much of this research depends on the willingness of volunteers to participate in clinical research. If you would like to help researchers advance what is known about sleep, or about a sleep disorder you have and its possible treatments, talk to your doctor about participating in clinical research. (For more information, see “Clinical Research” on page 58.)

Clinical Research

Researchers can learn quite a bit about sleep and sleep disorders by studying animals. However, to fully understand sleep and its effect on health and functioning, as well as how best to diagnose and treat sleep disorders, researchers need to do clinical research on people. This type of research is called clinical research because it is often conducted in clinical settings, such as hospitals or doctors’ offices. The two types of clinical research are clinical trials and clinical studies.

• Clinical trials test new ways to diagnose, prevent, or treat various disorders. For example, treatments (such as medicines, medical devices, surgery, or other procedures) for a disorder need to be tested in people who have the disorder. A trial helps determine whether a treatment is safe and effective in humans before it is made available for public use. In a clinical trial, participants are randomly assigned to groups. One group receives the new treatment being tested. Other groups may receive a different treatment or a placebo (an inactive substance resembling a drug being tested). Comparing results from the groups gives researchers confidence that changes in the test group are due to the new treatment and not to other factors.

• Other types of clinical studies are done to discover the factors, including environmental, behavioral, or genetic factors, that cause or worsen various disorders. Researchers may follow a group of people over time to learn what factors contribute to becoming sick.

Clinical studies and trials may be relatively brief, or may last for years and require many visits to the study sites. These sites usually are university hospitals or research centers, but they can include private doctors’ offices and community hospitals.
If you participate in clinical research, the research will be explained to you in detail, you will be given a chance to ask questions, and you will be asked to provide written permission. You may not directly benefit from the results of the clinical research you participate in, but the information gathered will help others and will add to scientific knowledge. Taking part in clinical research has other benefits, as well. You’ll learn more about your disorder, you’ll have the support of a team of health care providers, and your health will likely be monitored closely. However, participation also can have risks, which you should discuss with your doctor. No matter what you decide, your regular medical care will not be affected. If you’re thinking about participating in a clinical study, you may have questions about the purpose of the study, the types of tests and treatment involved, how participation will affect your daily life, and whether any costs are involved. Your doctor may be able to answer some of your questions and help you find clinical studies in which you can participate. You also can visit the following Web sites to learn about being in a study and to search for clinical trials being done on your disorder: www.clinicaltrials.gov http://clinicalresearch.nih.gov www.nhlbi.nih.gov/studies/index.htm Clinical Research Research 60Your Guide to Healthy Sleep For More Sleep Information Resources From the National Heart, Lung, and Blood Institute (NHLBI) National Center on Sleep Disorders Research Division of Lung Diseases, NHLBI Two Rockledge Centre, Suite 10170 6701 Rockledge Drive Bethesda, MD 20895–7952 Phone: 301–435–0199 Fax: 301–480–3451 Web site: www.nhlbi.nih.gov/sleep NHLBI Diseases and Conditions Index (DCI) The DCI includes articles on sleep disorders, tests, and procedures, along with videos, podcasts, and Spanish-language articles. Web site: www.nhlbi.nih.gov/health/dci/index.html NHLBI Health Information Center P.O. Box 30105 Bethesda, MD 20824–0105 Telephone: 301–592–8573 TTY: 240–629–3255 Fax: 301–592–8563 E-mail: [email protected] Web site: www.nhlbi.nih.gov NIH Office of Science Education Web site (for high school supplemental curriculum: Sleep, Sleep Disorders, and Biological Rhythms) http://science.education.nih.gov 61 Resources From Other Sleep Organizations American Academy of Sleep Medicine (AASM) 2510 North Frontage Road Darien, IL 60561 Telephone: 630–737–9700 Fax: 630–737–9790 Web site: www.aasmnet.org American Sleep Apnea Association 6856 Eastern Avenue, NW., Suite 203 Washington, DC 20012 Telephone: 202–203–3650 Fax: 202–293–3656 Web site: www.sleepapnea.org Narcolepsy Network P.O. 
P.O. Box 294
Pleasantville, NY 10570
Telephone: 401–667–2523
Fax: 401–633–6567
E-mail: [email protected]
Web site: www.narcolepsynetwork.org

National Sleep Foundation
1010 North Glebe Road, Suite 310
Arlington, VA 22201
Telephone: 703–243–1697
E-mail: [email protected]
Web site: www.sleepfoundation.org

Restless Legs Syndrome Foundation
1610 14th Street, NW., Suite 300
Rochester, MN 55901
Telephone: 507–287–6465
Fax: 507–287–6312
E-mail: [email protected]
Web site: www.rls.org

Discrimination Prohibited: Under provisions of applicable public laws enacted by Congress since 1964, no person in the United States shall, on the grounds of race, color, national origin, handicap, or age, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity (or, on the basis of sex, with respect to any education program or activity) receiving Federal financial assistance.
In addition, Executive Order 11141 prohibits discrimination on the basis of age by contractors and subcontractors in the performance of Federal contracts, and Executive Order 11246 states that no federally funded contractor may discriminate against any employee or applicant for employment because of race, color, religion, sex, or national origin. Therefore, the National Heart, Lung, and Blood Institute must be operated in compliance with these laws and Executive Orders.

NIH Publication No. 11-5271
Originally printed November 2005
Revised August 2011
Your Guide to Healthy Sleep

Contents
Introduction 1
What Is Sleep? 4
What Makes You Sleep? 7
What Does Sleep Do for You? 12
Your Learning, Memory, and Mood 12
Your Heart 13
Your Hormones 14
How Much Sleep Is Enough? 19
What Disrupts Sleep? 25
Is Snoring a Problem? 30
Common Sleep Disorders 33
Insomnia 35
Sleep Apnea 38
Restless Legs Syndrome 47
Narcolepsy 48
Parasomnias (Abnormal Arousals) 51
Do You Think You Have a Sleep Disorder? 53
How To Find a Sleep Center and Sleep Specialist 56
Research 57
For More Sleep Information 60

Introduction

Think of your daily activities. Which activity is so important you should devote one-third of your time to doing it? Probably the first things that come to mind are working, spending time with your family, or doing leisure activities. But there’s something else you should be doing about one-third of your time—sleeping.

Many people view sleep as merely a “down time” when their brains shut off and their bodies rest. People may cut back on sleep, thinking it won’t be a problem, because other responsibilities seem much more important. But research shows that a number of vital tasks carried out during sleep help people stay healthy and function at their best.

While you sleep, your brain is hard at work forming the pathways necessary for learning and creating memories and new insights. Without enough sleep, you can’t focus and pay attention or respond quickly. A lack of sleep may even cause mood problems. Also, growing evidence shows that a chronic lack of sleep increases your risk of obesity, diabetes, cardiovascular disease, and infections.

Despite growing support for the idea that adequate sleep, like adequate nutrition and physical activity, is vital to our well-being, people are sleeping less. The nonstop “24/7” nature of the world today encourages longer or nighttime work hours and offers continual access to entertainment and other activities. To keep up, people cut back on sleep.

A common myth is that people can learn to get by on little sleep (such as less than 6 hours a night) with no adverse effects. Research suggests, however, that adults need at least 7–8 hours of sleep each night to be well rested. Indeed, in 1910, most people slept 9 hours a night. But recent surveys show the average adult now sleeps fewer than 7 hours a night. More than one-third of adults report daytime sleepiness so severe that it interferes with work, driving, and social functioning at least a few days each month.

Evidence also shows that children’s and adolescents’ sleep is shorter than recommended. These trends have been linked to increased exposure to electronic media. Lack of sleep may have a direct effect on children’s health, behavior, and development.

Chronic sleep loss or sleep disorders may affect as many as 70 million Americans. This may result in an annual cost of $16 billion in health care expenses and $50 billion in lost productivity.

What happens when you don’t get enough sleep? Can you make up for lost sleep during the week by sleeping more on the weekends? How does sleep change as you become older? Is snoring a problem? How can you tell if you have a sleep disorder?
Read on to find the answers to these questions and to better understand what sleep is and why it is so necessary. Learn about common sleep myths and practical tips for getting enough sleep, coping with jet lag and nighttime shift work, and avoiding dangerous drowsy driving. Many common sleep disorders go unrecognized and thus are not Introduction treated. This booklet also gives the latest information on sleep disorders such as insomnia (trouble falling or staying asleep), sleep apnea (pauses in breathing during sleep), restless legs syndrome, narcolepsy (extreme daytime sleepiness), and parasomnias (abnormal sleep behaviors).It’s important to tell your doctor what you are experiencing, so you can help your doctor diagnose your condition. S Z E - P I N G “ ” 4Your Guide to Healthy Sleep What Is Sleep? Sleep was long considered just a block of time when your brain and body shut down. Thanks to sleep research studies done over the past several decades, it is now known that sleep has distinct stages that cycle throughout the night in predictable patterns. How well rested you are and how well you function depend not just on your total sleep time but on how much sleep you get each night and the timing of your sleep stages. Your brain and body functions stay active through out sleep, and each stage of sleep is linked to a specific type of brain waves (distinctive patterns of electrical activity in the brain). Sleep is divided into two basic types: rapid eye movement (REM) sleep and non-REM sleep (with three different stages). (For more information, see “Types of Sleep” on page 5.) Typically, sleep begins with non-REM sleep. In stage 1 non-REM sleep, you sleep lightly and can be awakened easily by noises or other disturbances. During this first stage of sleep, your eyes move slowly, your muscles relax, and your heart and breath ing rates begin to slow. You then enter stage 2 non-REM sleep, which is defined by slower brain waves with occasional bursts of rapid waves. You spend about half the night in this stage. When you progress into stage 3 non- REM sleep, your brain waves become even slower, and the brain produces extremely slow waves almost exclusively (called Delta waves). 5 l l l l l Stage 3 is a very deep stage of sleep, during which it is very difficult to be awakened. Children who wet the bed or sleep walk tend to do so during stage 3 of non-REM sleep. Deep sleep is considered the “restorative” stage of sleep that is necessary for feeling well rested and energetic during the day. Types of Sleep Non-REM Sleep REM Sleep Stage 1: Light sleep; easily awakened; muscles relax with occasional twitches; eye movements are slow. Stage 2: Eye movements stop; slower brain waves, with occasional bursts of rapid brain waves. Stage 3: Occurs soon after you fall asleep and mostly in the first half of the night. Deep sleep; difficult to awaken; large slow brain waves, heart and respiratory rates are slow and muscles are relaxed. Usually first occurs about 90 minutes after you fall asleep, and longer, deeper periods occur during the second half of the night; cycles along with the non-REM stages throughout the night. Eyes move rapidly behind closed eyelids. Breathing, heart rate, and blood pressure are irregular. Dreaming occurs. Arm and leg muscles are temporarily paralyzed. Types of Sleep During REM sleep, your eyes move rapidly in different directions, even though your eyelids stay closed. Your breathing also becomes more rapid, irregular, and shallow, and your heart rate and blood pressure increase. 
Dreaming typically occurs during REM sleep. During this type of sleep, your arm and leg muscles are temporarily paralyzed so that you cannot “act out” any dreams that you may be having. What Is Sleep? 6Your Guide to Healthy Sleep You typically first enter REM sleep about an hour to an hour and a half after falling asleep. After that, the sleep stages repeat them selves continuously while you sleep. As you sleep, REM sleep time becomes longer, while time spent in stage 3 non-REM sleep becomes shorter. By the time you wake up, nearly all your sleep time has been spent in stages 1 and 2 of non-REM sleep and in REM sleep. If REM sleep is severely disrupted during one night, REM sleep time is typically longer than normal in subsequent nights until you catch up. Overall, almost one-half of your total sleep time is spent in stage 2 non-REM sleep and about one-fifth each in deep sleep (stage 3 of non-REM sleep) and REM sleep. In contrast, infants spend half or more of their total sleep time in REM sleep. Gradually, as they grow, the percentage of total sleep time they spend in REM contin ues to decrease, until it reaches the one-fifth level typical of later childhood and adulthood. Why people dream and why REM sleep is so important are not well understood. It is known that REM sleep stimulates the brain regions you use to learn and make memories. Animal studies suggest that dreams may reflect the brain’s sorting and selectively storing new information acquired during wake time. While this information is processed, the brain might revisit scenes from the day and mix them randomly. Dreams are generally recalled when we wake briefly or are awakened by an alarm clock or some other noise in the environment. Studies show, however, that other stages of sleep besides REM also are needed to form the pathways in the brain that enable us to learn and remember. 7 What Makes You Sleep? Although you may put off going to sleep in order to squeeze more activities into your day, eventually your need for sleep becomes overwhelming. This need appears to be due, in part, to two sub stances your body produces. One substance, called adenosine, builds up in your blood while you’re awake. Then, while you sleep, your body breaks down the adenosine. Levels of this substance in your body may help trigger sleep when needed. A buildup of adenosine and many other complex factors might explain why, after several nights of less than optimal amounts of sleep, you build up a sleep debt. This may cause you to sleep longer than normal or at unplanned times during the day. Because of your body’s internal processes, you can’t adapt to getting less sleep than your body needs. Eventually, a lack of sleep catches up with you. The other substance that helps make you sleep is a hormone called melatonin. This hormone makes you naturally feel sleepy at night. It is part of your internal “biological clock,” which controls when you feel sleepy and your sleep patterns. Your biological clock is a small bundle of cells in your brain that works throughout the day and night. Internal and external environmental cues, such as light signals received through your eyes, control these cells. Your biologi cal clock triggers your body to produce melatonin, which helps prepare your brain and body for sleep. As melatonin is released, you’ll feel increasingly drowsy. Because of your biological clock, you naturally feel the most tired between midnight and 7 a.m. You also may feel mildly sleepy in the afternoon between 1 p.m. and 4 p.m. 
when another increase in melatonin occurs in your body. Your biological clock makes you the most alert during daylight hours and the least alert during the early morning hours. Conse quently, most people do their best work during the day. Our 24/7 society, however, demands that some people work at night. Nearly one-quarter of all workers work shifts that are not during the daytime, and more than two-thirds of these workers have problem sleepiness and/or difficulty sleeping. Because their work schedules What Makes You Sleep? 8Your Guide to Healthy Sleep are at odds with powerful sleep-regulating cues like sunlight, night shift workers often find themselves drowsy at work, and they have difficulty falling or staying asleep during the daylight hours when their work schedules require them to sleep. The fatigue experienced by night shift workers can be dangerous. Major industrial accidents—such as the Three Mile Island and Chernobyl nuclear power plant accidents and the Exxon Valdez oil spill—have been caused, in part, by mistakes made by overly tired workers on the night shift or an extended shift. Night shift workers also are at greater risk of being in car crashes when they drive home from work during the early morning hours, because the biological clock is not sending out an alerting signal. One study found that one-fifth of night shift workers had a car crash or a near miss in the preceding year because of sleepiness on the drive home from work. Night shift workers are also more likely to have physical problems, such as heart disease, digestive troubles, and infertility, as well as emotional problems. All of these problems may be related, at least in part, to the workers’ chronic sleepiness, possi bly because their biological clocks are not in tune with their work schedules. See “Working the Night Shift” on page 9 for some helpful tips if you work a night shift. Other factors also can influence your need for sleep, including your immune system’s production of hormones called cytokines. Cyto kines are made to help the immune system fight certain infections or chronic inflammation and may prompt you to sleep more than usual. The extra sleep may help you conserve the resources needed to fight the infection. Recent studies confirm that being well rested improves the body’s responses to infection. People are creatures of habit, and one of the hardest habits to break is the natural wake and sleep cycle. Together, a number of physiological factors help you sleep and wake up at the same times each day. 9 Consequently, you may have a hard time adjusting when you travel across time zones. The light cues outside and the clocks in your new location may tell you it is 8 a.m. and you should be active, but your body is telling you it is more like 4 a.m. and you should sleep. The end result is jet lag—sleepiness during the day, difficulty falling or staying asleep at night, poor concentration, confusion, nausea, and generally feeling unwell and irritable. See “Dealing With Jet Lag” on page 10. Working the Night Shift Try to limit night shift work, if that is possible. If you must work the night shift, the following tips may help you: l l l l l Increase your total amount of sleep by adding naps and lengthening the amount of time you allot for sleep. Use bright lights in your workplace. Minimize the number of shift changes so that your body’s biological clock has a longer time to adjust to a nighttime work schedule. Get rid of sound and light distractions in your bedroom during your daytime sleep. 
Use caffeine only during the first part of your shift to promote alertness at night. If you are unable to fall asleep during the day, and all else fails, talk with your doctor to see whether it would be wise for you to use prescribed, short-acting sleeping pills to help you sleep during the day. Night Shift What Makes You Sleep? 10Your Guide to Healthy Sleep Dealing With Jet Lag Be aware that adjusting to a new time zone may take several days. If you are going to be away for just a few days, it may be better to stick to your original sleep and wake times as much as possible, rather than adjusting your biological clock too many times in rapid succession. Eastward travel generally causes more severe jet lag than westward travel because traveling east requires you to shorten the day, and your biological clock is better able to adjust to a longer day than a shorter day. Fortunately for globetrotters, a few preventive measures and adjustments seem to help some people relieve jet lag, particularly when they are going to spend more than a few days at their destination: l l Adjust your biological clock. During the 2–3 days prior to a long trip, get adequate sleep. You can make minor changes to your sleep schedule. For example, if you are traveling west, delay your bed time and wake time progressively by 20- to 30-minute intervals. If you are traveling east, advance your wake time by 10 to 15 minutes a day for a few days and try to advance your bed time. Decreasing light exposure at bedtime and increasing light exposure at wake time can help you make these adjustments. When you arrive at your destination, spend a lot of time outdoors so your body gets the light cues it needs to adjust to the new time zone. Take a couple of short 10–15 minute catnaps if you feel tired, but do not take long naps during the day. Avoid alcohol and caffeine. Although it may be tempting to drink alcohol to relieve the stress of travel and make it easier to fall asleep, you’re more likely to sleep lighter and wake up in the middle of the night when the effects of the alcohol wear off. Caffeine can help keep you awake longer, but caffeine also can make it harder for you to fall asleep if its effects haven’t worn off by the time you are ready to go to bed. Therefore, it’s best to use caffeine only during the morning and not during the afternoon. 11 l What about melatonin? Your body produces this hormone that may cause some drowsiness and cues the brain and body that it is time to fall asleep. Melatonin builds up in your body during the early evening and into the first 2 hours of your sleep period, and then its release stops in the middle of the night. Melatonin is available as an over-the-counter supplement. Because melatonin is considered safe when used over a period of days or weeks and seems to help people feel sleepy, it has been suggested as a treatment for jet lag. But melatonin’s effectiveness is controversial, and its safety when used over a prolonged period is unclear. Some studies find that taking melatonin supplements before bedtime for several days after arrival in a new time zone can make it easier to fall asleep at the proper time. Other studies find that melatonin does not help relieve jet lag. What Makes You Sleep? Jet Lag 12Your Guide to Healthy Sleep What Does Sleep Do for You? A number of aspects of your health and quality of life are linked to sleep, and these aspects are impaired when you are sleep deprived. 
Your Learning, Memory, and Mood Students who have trouble grasping new information or learning new skills are often advised to “sleep on it,” and that advice seems well founded. Recent studies reveal that people can learn a task better if they are well rested. They also can better remember what they learned if they get a good night’s sleep after learning the task than if they are sleep deprived. Study volunteers had to sleep at least 6 hours to show improvement in learning. Additionally, the amount of improvement was directly related to how much time they slept—for example, volunteers who slept 8 hours outperformed those who slept only 6 or 7 hours. Other studies suggest that it’s important to get enough rest the night before a mentally challenging task, rather than only sleeping for a short period or waiting to sleep until after the task is complete. Many well-known artists and scientists claim to have had creative insights while they slept. Mary Shelley, for example, said the idea for her novel Frankenstein came to her in a dream. Although it has not been shown that dreaming is the driving force behind innova tion, one study suggests that sleep is needed for creative problem- solving. In that study, volunteers were asked to perform a memory task and then were tested on it 8 hours later. Those who were allowed to sleep for 8 hours immediately after trying the task and before being tested were much more likely to find a creative way of simplifying the task and improving their performance, compared with those who were awake the entire 8 hours before being tested. Exactly what happens during sleep to improve our learning, memo ry, and insight isn’t known. Experts suspect, however, that while 13 people sleep, they form or strengthen the pathways of brain cells needed to perform these tasks. This process may explain why sleep is needed for proper brain development in infants. Not only is a good night’s sleep required to form new learning and memory pathways in the brain, but also sleep is necessary for those pathways to work well. Several studies show that lack of sleep causes thinking processes to slow down. Lack of sleep also makes it harder to focus and pay attention. Lack of sleep can make you more easily confused. Studies also find that a lack of sleep leads to faulty decisionmaking and more risk taking. A lack of sleep slows down your reaction time, which is particularly important to driving and other tasks that require quick response. When people who lack sleep are tested on a driving simulator, they perform just as poorly as people who are drunk. (See “Crash in Bed, Not on the Road” on page 16.) The bottom line is: Not getting a good night’s sleep can be dangerous! Even if you don’t have a mentally or physically challenging day ahead of you, you should still get enough sleep to put yourself in a good mood. Most people report being irritable, if not downright unhappy, when they lack sleep. People who chronically suffer from a lack of sleep, either because they do not spend enough time in bed or because they have an untreated sleep disorder, are at greater risk of developing depression. One group of people who usually don’t get enough sleep is mothers of newborns. Some experts think depression after childbirth (postpar tum blues) is caused, in part, by a lack of sleep. Your Heart Sleep gives your heart and vascular system a much-needed rest. During non-REM sleep, your heart rate and blood pressure progressively slow as you enter deeper sleep. 
During REM sleep, in response to dreams, your heart What Does Sleep Do for You? 14Your Guide to Healthy Sleep and breathing rates can rise and fall and your blood pressure can be variable. These changes throughout the night in blood pressure and heart and breathing rates seem to promote cardiovascular health. If you don’t get enough sleep, the nightly dip in blood pressure that appears to be important for good cardiovascular health may not occur. Failure to experience the normal dip in blood pressure during sleep can be related to insufficient sleep time, an untreated sleep disorder (for example, sleep apnea), or other factors. Some sleep- related abnormalities may be markers of heart disease and increased risk of stroke. A lack of sleep also puts your body under stress and may trigger the release of more adrenaline, cortisol, and other stress hormones during the day. These hormones keep your blood pressure from dipping during sleep, which increases your risk for heart disease. Lack of sleep also may trigger your body to produce more of certain proteins thought to play a role in heart disease. For example, some studies find that people who repeatedly don’t get enough sleep have higher than normal blood levels of C-reactive protein, a sign of inflammation. High levels of this protein may indicate an increased risk for a condition called atherosclerosis, or hardening of the arteries. Your Hormones When you were young, your mother may have told you that you need to get enough sleep to grow strong and tall. She may have been right! Deep sleep (stage 3 non-REM sleep) triggers more release of growth hormone, which contributes to growth in children and boosts muscle mass and the repair of cells and tissues in children and adults. Sleep’s effect on the release of sex hormones also contributes to puberty and fertility. Consequently, women who work at night and tend to lack sleep may be at increased risk of miscarriage. Your mother also probably was right if she told you that getting a good night’s sleep on a regular basis would help keep you from getting sick and help you get better if you do get sick. During sleep, your body creates more cytokines—cellular hormones that help the immune system fight various infections. Lack of sleep can reduce your body’s ability to fight off common infections. Research also reveals that a lack of sleep can reduce the body’s response to the flu 15 vaccine. For example, sleep-deprived volunteers given the flu vaccine produced less than half as many flu antibodies as those who were well rested and given the same vaccine. Although lack of exercise and other factors also contribute, the current epidemic of diabetes and obesity seems to be related, at least in part, to chronically short or disrupted sleep or not sleeping during the night. Evidence is growing that sleep is a powerful regulator of appetite, energy use, and weight control. During sleep, the body’s production of the appetite suppressor leptin increases, and the appetite stimulant grehlin decreases. Studies find that the less people sleep, the more likely they are to be overweight or obese and prefer eating foods that are higher in calories and carbohydrates. People who report an average total sleep time of 5 hours a night, for example, are much more likely to become obese, compared with people who sleep 7–8 hours a night. A number of hormones released during sleep also control the body’s use of energy. A distinct rise and fall of blood sugar levels during sleep appears to be linked to sleep stages. 
Not sleeping at the right time, not getting enough sleep overall, or not enough of each stage of sleep disrupts this pattern. One study found that, when healthy young men slept only 4 hours a night for 6 nights in a row, their insulin and blood sugar levels matched those seen in people who were developing diabetes. Another study found that women who slept less than 7 hours a night were more likely to develop diabetes over time than those who slept between 7 and 8 hours a night. What Does Sleep Do for You? 16Your Guide to Healthy Sleep Crash in Bed Not on the Road Most people are aware of the hazards of drunk driving. But driving while sleepy can be just as dangerous. Indeed, crashes due to sleepy drivers are as deadly as those due to drivers impaired by alcohol. And you don’t have to be asleep at the wheel to put yourself and others in danger. Both alcohol and a lack of sleep limit your ability to react quickly to a suddenly braking car, a sharp curve in the road, or other situations that require rapid responses. Just a few seconds’ delay in reaction time can be a life-or-death matter when driving. When people who lack sleep are tested on a driving simulator, they perform as badly as or worse than those who are drunk. The combination of alcohol and lack of sleep can be especially dangerous. There is increasing evidence that sleep deprivation and inexperience behind the wheel, both particularly common in adolescents, is a lethal combination. Of course, driving is also hazardous if you fall asleep at the wheel, which happens surprisingly often. One-quarter of the drivers surveyed in New York State reported they had fallen asleep at the wheel at some time. Often, people briefly nod off at the wheel without being aware of it—they just can’t recall what happened over the previous few seconds or longer. And people who lack sleep are more apt to take risks and make poor judgments, which also can boost their chances of getting in a car crash. Opening a window or turning up the radio won’t help you stay awake while driving. The bottom line is that there is no substitute for sleep. Be aware of these warning signs that you are too sleepy to drive safely: trouble keeping your eyes open or focused, continual yawning, or being unable to recall driving the past few miles. Remember, if you are short on sleep, stay out of the driver’s seat! 17 Here are some potentially life-saving tips for avoiding drowsy driving: l l l l l l Be well rested before hitting the road. If you have several nights in a row of fewer than 7–8 hours of sleep, your reaction time slows. Restoring that reaction time to normal can take more than one night of good sleep, because a sleep debt accumulates after each night you lose sleep. It may take several nights of being well rested to repay that sleep debt and make you ready for driving on a long road trip. Avoid driving between midnight and 7 a.m. Unless you are accustomed to being awake then, this period of time is when we are naturally the least alert and most tired. Don’t drive alone. A companion who can keep you engaged in conversation might help you stay awake while driving. Schedule frequent breaks on long road trips. If you feel sleepy while driving, pull off the road and take a nap for 15–20 minutes. Don’t drink alcohol. Just one beer when you are sleep deprived will affect you as much as two or three beers when you are well rested. Don’t count on caffeine or other tricks. 
Although drinking a cola or a cup of coffee might help keep you awake for a short time, it won’t overcome extreme sleepiness or relieve a sleep debt. What Does Sleep Do for You? 18Your Guide to Healthy SleepI wake up early to get ready for school. I am tired in the morning, and by the end of the school day, I am very tired again. An afterschool nap seems to refresh me and help me focus on homework. Without it, I am grumpy and stressed, can’t focus, and sometimes get headaches. D A P H N E “ ” 19 How Much Sleep Is Enough? Animal studies suggest that sleep is as vital as food for survival. Rats, for example, normally live 2–3 years, but they live only 5 weeks if they are deprived of REM sleep and only 2–3 weeks if they are deprived of all sleep stages—a timeframe similar to death due to starvation. But how much sleep do humans need? To help answer that question, scientists look at how much people sleep when unrestricted, the average amount of sleep among various age groups, and the amount of sleep that studies reveal is necessary to function at your best. When healthy adults are given unlimited opportunity to sleep, they sleep on average between 8 and 8.5 hours a night. But sleep needs vary from person to person. Some people appear to need only about 7 hours to avoid problem sleepiness, whereas others need 9 or more hours of sleep. Sleep needs also change throughout the life cycle. Newborns sleep between 16 and 18 hours a day, and children in preschool sleep between 11 and 12 hours a day. School-aged children and adolescents need at least 10 hours of sleep each night. The hormonal influences of puberty tend to shift adolescents’ biologi cal clocks. As a result, teenagers (who need between 9 and 10 hours of sleep a night) are more likely to go to bed later than younger children and adults, and they tend to want to sleep later in the morning. This delayed sleep–wake rhythm conflicts with the early- morning start times of many high schools and helps explain why most teenagers get an average of only 7–7.5 hours of sleep a night. As people get older, the pattern of sleep also changes—especially the amount of time spent in deep sleep. This explains why children can sleep through loud noises and why they might not wake up when moved. Across the lifespan, the sleep period tends to advance, namely relative to teenagers; older adults tend to go to bed earlier and wake earlier. The quality—but not necessarily the quantity—of How Much Sleep Is Enough? 20Your Guide to Healthy Sleep deep, non-REM sleep also changes, with a trend toward lighter sleep. The relative percentages of stages of sleep appear to stay mostly constant after infancy. From midlife through late life, people awaken more throughout the night. These sleep disruptions cause older people to lose more and more of stages 1 and 2 non-REM sleep as well as REM sleep. Some older people complain of difficulty falling asleep, early morning awakenings, frequent and long awakenings during the night, daytime sleepiness, and a lack of refreshing sleep. Many sleep problems, however, are not a natural part of sleep in the elderly. Their sleep complaints may be due, in part, to medical conditions, illnesses, or medications they are taking— all of which can disrupt sleep. In fact, one study found that the prevalence of sleep problems is very low in healthy older adults. Other causes of some of older adults’ sleep complaints are sleep apnea, restless legs syndrome, and other sleep disorders that become more common with age. 
Also, older people are more likely to have their sleep disrupted by the need to urinate during the night. Some evidence shows that the biological clock shifts in older people, so they are more apt to go to sleep earlier at night and wake up earlier in the morning. No evidence indicates that older people can get by with less sleep than younger people. (See “Top 10 Sleep Myths” on page 22.) Poor sleep in older people may result in excessive daytime sleepiness, attention and memory problems, depressed mood, and overuse of sleeping pills. Despite variations in sleep quantity and quality, both related to age and 21 between individuals, studies suggest that the optimal amount of sleep needed to perform adequately, avoid a sleep debt, and not have problem sleepiness during the day is about 7–8 hours for adults and at least 10 hours for school-aged children and adolescents. Similar amounts seem to be necessary to avoid an increased risk of develop ing obesity, diabetes, or cardiovascular diseases. Quality of sleep and the timing of sleep are as important as quantity. People whose sleep is frequently interrupted or cut short may not get enough of both non-REM sleep and REM sleep. Both types of sleep appear to be crucial for learning and memory—and perhaps for the restorative benefits of healthy sleep, including the growth and repair of cells. Many people try to make up for lost sleep during the week by sleeping more on the weekends. But if you have lost too much sleep, sleeping in on a weekend does not completely erase your sleep debt. Certainly, sleeping more at the end of a week won’t make up for any poor performance you had earlier in that week. Just one night of inadequate sleep can negatively affect your functioning and mood during at least the next day. Daytime naps are another strategy some people use to make up for lost sleep during the night. Some evidence shows that short naps (up to an hour) can make up, at least partially, for the sleep missed on the previous night and improve alertness, mood, and work performance. But naps don’t substitute for a good night’s sleep. One study found that a daytime nap after a lack of sleep at night did not fully restore levels of blood sugar to the pattern seen with adequate nighttime sleep. If a nap lasts longer than 20 minutes, you may have a hard time waking up fully. In addition, late afternoon naps can make falling asleep at night more difficult. How Much Sleep Is Enough? 22Your Guide to Healthy Sleep Top 10 Sleep Myths Myth 1: Sleep is a time when your body and brain shut down for rest and relaxation. No evidence shows that any major organ (including the brain) or regulatory system in the body shuts down during sleep. Some physiological processes actually become more active while you sleep. For example, secretion of certain hormones is boosted, and activity of the pathways in the brain linked to learning and memory increases. Myth 2: Getting just 1 hour less sleep per night than needed will not have any effect on your daytime functioning. This lack of sleep may not make you noticeably sleepy during the day. But even slightly less sleep can affect your ability to think properly and respond quickly, and it can impair your cardiovascular health and energy balance as well as your body’s ability to fight infections, particularly if lack of sleep continues. If you consistently do not get enough sleep, a sleep debt builds up that you can never repay. This sleep debt affects your health and quality of life and makes you feel tired during the day. 
Myth 3: Your body adjusts quickly to different sleep schedules. Your biological clock makes you most alert during the daytime and least alert at night. Thus, even if you work the night shift, you will naturally feel sleepy when nighttime comes. Most people can reset their biological clock, but only by appropriately timed cues—and even then, by 1–2 hours per day at best. Consequently, it can take more than a week to adjust to a substantial change in your sleep–wake cycle—for example, when traveling across several time zones or switching from working the day shift to the night shift. Myth 4: People need less sleep as they get older. Older people don’t need less sleep, but they may get less sleep or find their sleep less refreshing. That’s because as people age, the quality of their sleep changes. Older people are also more likely to have insomnia or other medical conditions that disrupt their sleep. 23 Myth 5: Extra sleep for one night can cure you of problems with excessive daytime fatigue. Not only is the quantity of sleep important, but also the quality of sleep. Some people sleep 8 or 9 hours a night but don’t feel well rested when they wake up because the quality of their sleep is poor. A number of sleep disorders and other medical conditions affect the quality of sleep. Sleeping more won’t lessen the daytime sleepiness these disorders or conditions cause. However, many of these disorders or conditions can be treated effectively with changes in behavior or with medical therapies. Additionally, one night of increased sleep may not correct multiple nights of inadequate sleep. Myth 6: You can make up for lost sleep during the week by sleeping more on the weekends. Although this sleeping pattern will help you feel more rested, it will not completely make up for the lack of sleep or correct your sleep debt. This pattern also will not necessarily make up for impaired performance during the week or the physical problems that can result from not sleeping enough. Furthermore, sleeping later on the weekends can affect your biological clock, making it much harder to go to sleep at the right time on Sunday nights and get up early on Monday mornings. Myth 7: Naps are a waste of time. Although naps are no substitute for a good night’s sleep, they can be restorative and help counter some of the effects of not getting enough sleep at night. Naps can actually help you learn how to do certain tasks quicker. But avoid taking naps later than 3 p.m., particularly if you have trouble falling asleep at night, as late naps can make it harder for you to fall asleep when you go to bed. Also, limit your naps to no longer than 20 minutes, because longer naps will make it harder to wake up and How Much Sleep Is Enough? 24Your Guide to Healthy Sleep Top 10 Sleep Myths (continued) get back in the swing of things. If you take more than one or two planned or unplanned naps during the day, you may have a sleep disorder that should be treated. Myth 8: Snoring is a normal part of sleep. Snoring during sleep is common, particularly as a person gets older. Evidence is growing that snoring on a regular basis can make you sleepy during the day and increase your risk for diabetes and heart disease. In addition, some studies link frequent snoring to problem behavior and poorer school achievement in children. Loud, frequent snoring also can be a sign of sleep apnea, a serious sleep disorder that should be evaluated and treated. (See “Is Snoring a Problem?” on page 30.) 
Myth 9: Children who don’t get enough sleep at night will show signs of sleepiness during the day. Unlike adults, children who don’t get enough sleep at night typically become hyperactive, irritable, and inattentive during the day. They also have increased risk of injury and more behavior problems, and their growth rate may be impaired. Sleep debt appears to be quite common during childhood and may be misdiagnosed as attention-deficit hyperactivity disorder. Myth 10: The main cause of insomnia is worry. Although worry or stress can cause a short bout of insomnia, a persistent inability to fall asleep or stay asleep at night can be caused by a number of other factors. Certain medications and sleep disorders can keep you up at night. Other common causes of insomnia are depression, anxiety disorders, and asthma, arthritis, or other medical conditions with symptoms that tend to be troublesome at night. Some people who have chronic insomnia also appear to be more “revved up” than normal, so it is harder for them to fall asleep. Sleep Myths 25When medicines didn’t work for me, I started making big lifestyle changes. Now I try to eat a balanced diet and walk for at least an hour each day. Without doubt, my weight loss and more active lifestyle help me sleep better. What Disrupts Sleep? Many factors can prevent a good night’s sleep. These factors range from well-known stimulants, such as coffee, to certain pain relievers, decongestants, and other culprits. Many people depend on the caffeine in coffee, cola, or tea to wake them up in the morning or to keep them awake. Caffeine is thought to block the cell receptors that adenosine (a substance in the brain) uses to trigger its sleep- inducing signals. In this way, caffeine fools the body into thinking it isn’t tired. It can take as long as 6–8 hours for the effects of caffeine to wear off completely. Thus, drinking a cup of coffee in the late afternoon may prevent your falling asleep at night. Nicotine is another stimulant that can keep you awake. Nicotine also leads to lighter than normal sleep, and heavy smokers tend to wake up too early because of nicotine withdrawal. Although alcohol is a sedative that makes it easier to fall asleep, it prevents deep sleep and REM sleep, allowing only the lighter stages of sleep. People who drink alcohol also tend to wake up in the middle of the night when the effects of an alcoholic “nightcap” wear off. Certain commonly used prescrip tion and over-the-counter medi cines contain ingredients that can keep you awake. These ingredients include decongestants and steroids. Many medicines taken to relieve headaches contain caffeine. Heart and blood pressure medications known as beta blockers can make it difficult to fall asleep and cause more awakenings during the night. People who have chronic asthma or bronchitis also have more problems falling asleep and staying asleep than healthy people, either because of their breathing difficul ties or because of the medicines What Disrupts Sleep? S Z E - P I N G “ ” 26Your Guide to Healthy Sleep they take. Other chronic painful or uncomfortable conditions— such as arthritis, congestive heart failure, and sickle cell anemia— can disrupt sleep, too. A number of psychological disorders—including schizophrenia, bipolar disorder, and anxiety disorders—are well known for disrupt ing sleep. Depression often leads to insomnia, and insomnia can cause depression. Some of these psychological disorders are more likely to disrupt REM sleep. 
Psychological stress also takes its toll on sleep, making it more difficult to fall asleep or stay asleep. People who feel stressed also tend to spend less time in deep sleep and REM sleep. Many people report having difficulties sleeping if, for example, they have recently lost a loved one, are going through a divorce, or are under stress at work. Menstrual cycle hormones can affect how well women sleep. Pro gesterone is known to induce sleep and circulates in greater concen trations in the second half of the menstrual cycle. For this reason, women may sleep better during this phase of their menstrual cycle. On the other hand, many women report trouble sleeping the night before their menstrual flow starts. This sleep disruption may be related to the abrupt drop in progesterone levels that occurs just before menstruation. Women in their late forties and early fifties, however, report more difficulties sleeping (insomnia) than younger women. These difficulties may be linked to menopause, when they have lower concentrations of progesterone. Hot flashes in women of this age also may cause sleep disruption and difficulties. Certain lifestyle factors also may deprive a person of needed sleep. Large meals or vigorous exercise just before bedtime can make it harder to fall asleep. While vigorous exercise in the evening may delay sleep onset for various reasons, exercise in the daytime is associated with improved nighttime sleep. If you aren’t getting enough sleep or aren’t falling asleep early enough, you may be overscheduling activi ties that can pre vent you from getting the 27 quiet relaxation time you need to prepare for sleep. Most people report that it’s easier to fall asleep if they have time to wind down into a less active state before sleeping. Relaxing in a hot bath or having a hot, caffeine-free beverage before bedtime may help. In addition, your body temperature drops after a hot bath in a way that mimics, in part, what happens as you fall asleep. Probably for both these reasons, many people report that they fall asleep more easily after a hot bath. Your sleeping environment also can affect your sleep. Clear your bedroom of any potential sleep distractions, such as noises, bright lights, a TV, a cell phone, or computer. Having a comfortable mattress and pillow can help promote a good night’s sleep. You also sleep better if the temperature in your bedroom is kept on the cool side. For more ideas on improving your sleep, check out the tips for getting a good night’s sleep below. Tips for Getting a Good Night’s Sleep l l l Stick to a sleep schedule. Go to bed and wake up at the same time each day. As creatures of habit, people have a hard time adjusting to changes in sleep patterns. Sleeping later on weekends won’t fully make up for a lack of sleep during the week and will make it harder to wake up early on Monday morning. Exercise is great, but not too late in the day. Try to exercise at least 30 minutes on most days but not later than 2–3 hours before your bedtime. Avoid caffeine and nicotine. Coffee, colas, certain teas, and chocolate contain the stimulant caffeine, and its effects can take as long as 8 hours to wear off fully. Therefore, a cup of coffee in the late afternoon can make it hard for you to fall asleep at night. Nicotine is also a stimulant, often causing smokers to sleep only very lightly. In addition, smokers often wake up too early in the morning because of nicotine withdrawal. What Disrupts Sleep? 
28Your Guide to Healthy Sleep Tips for Getting a Good Night’s Sleep (continued) l l l l l l l Avoid alcoholic drinks before bed. Having a “nightcap” or alcoholic beverage before sleep may help you relax, but heavy use robs you of deep sleep and REM sleep, keeping you in the lighter stages of sleep. Heavy alcohol ingestion also may contribute to impairment in breathing at night. You also tend to wake up in the middle of the night when the effects of the alcohol have worn off. Avoid large meals and beverages late at night. A light snack is okay, but a large meal can cause indigestion that interferes with sleep. Drinking too many fluids at night can cause frequent awakenings to urinate. If possible, avoid medicines that delay or disrupt your sleep. Some commonly prescribed heart, blood pressure, or asthma medications, as well as some over-the-counter and herbal remedies for coughs, colds, or allergies, can disrupt sleep patterns. If you have trouble sleeping, talk to your doctor or pharmacist to see whether any drugs you’re taking might be contributing to your insomnia and ask whether they can be taken at other times during the day or early in the evening. Don’t take naps after 3 p.m. Naps can help make up for lost sleep, but late afternoon naps can make it harder to fall asleep at night. Relax before bed. Don’t overschedule your day so that no time is left for unwinding. A relaxing activity, such as reading or listening to music, should be part of your bedtime ritual. Take a hot bath before bed. The drop in body temperature after getting out of the bath may help you feel sleepy, and the bath can help you relax and slow down so you’re more ready to sleep. Have a good sleeping environment. Get rid of anything in your bedroom that might distract you from sleep, such as noises, bright lights, an uncomfortable bed, or warm temperatures. You sleep better if the temperature in the room 29 is kept on the cool side. A TV, cell phone, or computer in the bedroom can be a distraction and deprive you of needed sleep. Having a comfortable mattress and pillow can help promote a good night’s sleep. Individuals who have insomnia often watch the clock. Turn the clock’s face out of view so you don’t worry about the time while trying to fall asleep. l l l Have the right sunlight exposure. Daylight is key to regulating daily sleep patterns. Try to get outside in natural sunlight for at least 30 minutes each day. If possible, wake up with the sun or use very bright lights in the morning. Sleep experts recommend that, if you have problems falling asleep, you should get an hour of exposure to morning sunlight and turn down the lights before bedtime. Don’t lie in bed awake. If you find yourself still awake after staying in bed for more than 20 minutes or if you are starting to feel anxious or worried, get up and do some relaxing activity until you feel sleepy. The anxiety of not being able to sleep can make it harder to fall asleep. See a doctor if you continue to have trouble sleeping. If you consistently find it difficult to fall or stay asleep and/ or feel tired or not well rested during the day despite spending enough time in bed at night, you may have a sleep disorder. Your family doctor or a sleep specialist should be able to help you, and it is important to rule out other health or psychiatric problems that may be disturbing your sleep. What Disrupts Sleep? 30My wife noticed that I snored loudly and sometimes stopped breathing in the middle of the night. She was the one who finally pushed me to see a doctor. 
Is Snoring a Problem? Long the material for jokes, snoring is generally accepted as com mon and annoying in adults but as nothing to worry about. How ever, snoring is no laughing matter. Frequent, loud snoring is often a sign of sleep apnea and may increase your risk of developing cardio vascular disease and diabetes. Snoring also may lead to daytime sleepiness and impaired performance. Snoring is caused by a narrowing or partial blockage of the airways at the back of your mouth, throat, or nose. This obstruction results in increased air turbulence when breathing in, causing the soft tissues in your upper airways to vibrate. The end result is a noisy snore that can disrupt the sleep of your bed partner. This narrowing of the airways is typically caused by the soft palate, tongue, and throat relaxing while you sleep, but allergies or sinus problems also can contribute to a narrowing of the airways, as can being over weight and having extra soft tissue around your upper airways. The larger the tissues in your soft palate (the roof of your mouth in the back of your throat), the more likely you are to snore while sleeping. Alcohol or sedatives taken shortly before sleep also promote snoring. These drugs cause greater relaxation of the tissues Your Guide to Healthy Sleep in your throat and mouth. Surveys reveal that about one-half of all adults snore, and 50 percent of these adults do so loudly and frequently. African Americans, Asians, and Hispanics are more likely to snore loudly and frequent ly compared with Caucasians, and snoring problems increase with age. Not everyone who snores has sleep apnea, but people who have sleep apnea typically do snore loudly and frequently. Sleep apnea is a J I M “ ” 31 serious sleep disorder, and its hallmark is loud, frequent snoring with pauses in breathing or shallow breaths while sleeping. (See “Sleep Apnea” on page 38.) Even if you don’t experience these breathing pauses, snoring can still be a problem for you as well as for your bed partner. Snoring adds extra effort to your breathing, which can reduce the quality of your sleep and lead to many of the same health consequences as sleep apnea. One study found that older adults who did not have sleep apnea, but who snored 6–7 nights a week, were more than twice as likely to report being extremely sleepy during the day than those who never snored. The more people snored, the more daytime fatigue they reported. That sleepiness may help explain why snorers are more likely to be in car crashes than people who don’t snore. Loud snoring also can disrupt the sleep of bed partners and strain marital relations, especially if snoring causes the spouses to sleep in separate bedrooms. In addition, snoring increases the risk of developing diabetes and heart disease. One study found that women who snored regularly were twice as likely as those who did not snore to develop diabetes, even if they were not overweight (another risk factor for diabetes). Other studies suggest that regular snoring may raise the lifetime risk of developing high blood pressure, heart failure, and stroke. About one-third of all pregnant women begin snoring for the first time during their second trimester. If you are snoring while preg nant, let your doctor know. Snoring in pregnancy can be associated with high blood pressure and can have a negative effect on your baby’s growth and development. 
Your doctor will keep a close eye on your blood pressure throughout your pregnancy and can let you know if any additional evaluations for the snoring might be useful. In most cases, the snoring and any related high blood pressure will go away shortly after delivery. Snoring also can be a problem in children. As many as 10–15 per cent of young children, who typically have enlarged adenoids and tonsils (both tissues in the throat), snore on a regular basis. Several studies show that children who snore (with or without sleep apnea) are more likely than those who do not snore to score lower on tests that measure intelligence, memory, and attention span. These children also have more problematic behavior, including hyperactiv ity. The end result is that children who snore don’t perform in Is Snoring a Problem? 32Your Guide to Healthy Sleep school as well as those who do not snore. Strikingly, snoring was linked to a greater drop in IQ than that seen in children who had elevated levels of lead in their blood. Although the behavior of children improves after they stop snoring, studies suggest they may continue to get poorer grades in school, perhaps because of lasting effects on the brain linked to the snoring. You should have your child evaluated by your doctor if the child snores loudly and frequently—three to four times a week—especially if you note brief pauses in breathing while asleep and if there are signs of hyperactiv ity or daytime sleepiness, inadequate school achievement, or slower than expected development. Surgery to remove the adenoids and tonsils of children often can cure their snoring and any associated sleep apnea. Such surgery has been linked to a reduction in hyperactivity and improved ability to pay attention, even in children who showed no signs of sleep apnea before surgery. Snoring in older children and adults may be relieved by less invasive measures, however. These measures include losing weight, refraining from use of tobacco, sleeping on the side rather than on the back, or elevating the head while sleeping. Treating chronic congestion and refraining from alcohol or sedatives before sleeping also may de crease snoring. In some adults, snoring can be relieved by dental appliances that reposition the soft tissues in the mouth. Although numerous over-the-counter nasal strips and sprays claim to relieve snoring, no scientific evidence supports those claims. 33 Common Sleep Disorders A number of sleep disorders can disrupt your sleep quality and make you overly sleepy during the day, even if you spent enough time in bed to be well rested. (See “Common Signs of a Sleep Disorder” on page 34.) Common Sleep Disorders More than 70 sleep disorders affect at least 40 million Americans and account for an estimated $16 billion in medical costs each year, not counting costs due to lost work time, car accidents, and other factors. The four most common sleep disorders are insomnia, sleep apnea, restless legs syndrome, and narcolepsy. Additional sleep problems include chronic insufficient sleep, circadian rhythm abnormalities, and “parasomnias” such as sleep walking, sleep paralysis, and night terrors. L A U R E N “My restless legs syndrome made me lose sleep and affected my quality of life. But I’m in a good place right now. I’m taking the right medicine for me, and I’ve adopted a healthy, active lifestyle. I am very passionate about taking control of my health. 
Common Signs of a Sleep Disorder

Look over this list of common signs of a sleep disorder, and talk to your doctor if you have any of them on three or more nights a week:

- It takes you more than 30 minutes to fall asleep at night.
- You awaken frequently in the night and then have trouble falling back to sleep again.
- You awaken too early in the morning.
- You often don’t feel well rested despite spending 7–8 hours or more asleep at night.
- You feel sleepy during the day and fall asleep within 5 minutes if you have an opportunity to nap, or you fall asleep unexpectedly or at inappropriate times during the day.
- Your bed partner claims you snore loudly, snort, gasp, or make choking sounds while you sleep, or your partner notices that your breathing stops for short periods.
- You have creeping, tingling, or crawling feelings in your legs that are relieved by moving or massaging them, especially in the evening and when you try to fall asleep.
- You have vivid, dreamlike experiences while falling asleep or dozing.
- You have episodes of sudden muscle weakness when you are angry or fearful, or when you laugh.
- You feel as though you cannot move when you first wake up.
- Your bed partner notes that your legs or arms jerk often during sleep.
- You regularly need to use stimulants to stay awake during the day.

Also keep in mind that, although children can show some of these signs of a sleep disorder, they often do not show signs of excessive daytime sleepiness. Instead, they may seem overactive and have difficulty focusing and concentrating. They also may not do their best in school.

Insomnia

Insomnia is defined as having trouble falling asleep or staying asleep, or as having unrefreshing sleep despite having ample opportunity to sleep. Life is filled with events that occasionally cause insomnia for a short time. Such temporary insomnia is common and is often brought on by situations such as stress at work, family pressures, or a traumatic event. A National Sleep Foundation poll of adults in the United States found that close to half of the respondents reported temporary insomnia in the nights immediately after the terrorist attacks on September 11, 2001.

Chronic insomnia is defined as having symptoms at least 3 nights per week for more than 1 month. Most cases of chronic insomnia are secondary, which means they are due to another disorder or medications. Primary chronic insomnia is a distinct sleep disorder; its cause is not yet well understood. About 30–40 percent of adults say they have some symptoms of insomnia within any given year, and about 10–15 percent of adults say they have chronic insomnia. Chronic insomnia becomes more common with age, and women are more likely than men to report having insomnia.

Insomnia often causes problems during the day, such as extreme sleepiness, fatigue, a lack of energy, difficulty concentrating, depressed mood, and irritability. Thus, untreated insomnia can impair quality of life as much as, or more than, other chronic medical problems.

Chronic insomnia is often caused by one or more of the following:

- A disease or mood disorder. The most common causes of insomnia are depression and/or anxiety disorders. Neurological disorders, such as Alzheimer’s or Parkinson’s disease, also can have insomnia as a symptom. Chronic insomnia can result from thyroid dysfunction, arthritis, asthma, or other medical conditions in which symptoms become more troublesome at night, making it difficult to fall asleep or stay asleep.
- Various prescribed and over-the-counter medications that can disrupt sleep, such as decongestants, certain pain relievers, and steroids.
- Sleep-disrupting behavior such as drinking alcohol, exercising shortly before bedtime, ingesting caffeine late in the day, watching TV or reading while in bed, or irregular sleep schedules due to shift work or other causes.
- Another sleep disorder, such as sleep apnea or restless legs syndrome.

Some people, however, have primary chronic insomnia. This condition is linked to a tendency to be more “revved up” than normal (hyperarousal). People who have primary chronic insomnia may have heightened levels of certain hormones, higher body temperatures, faster heart rates, and a different pattern of brain waves while they sleep.

Doctors diagnose insomnia based mainly on sleep history, often by reviewing a sleep diary. An overnight sleep recording may be required if another sleep disorder is suspected. Doctors also will try to diagnose and treat any other underlying medical or psychological problems as well as identify behaviors that might be causing the insomnia.

Often, people who have insomnia enter into a vicious cycle—because they’ve had trouble sleeping on previous nights, they become anxious at the slightest sign that they may not be falling asleep right away. That anxiety can make it more difficult for them to fall asleep. The more time they spend in bed not sleeping, and watching the clock, the more their anxiety—and sleeplessness—increases. To break that cycle of anxiety and negative conditioning, experts recommend going to bed only when you’re sleepy. If you can’t fall asleep (or fall back to sleep) within 20 minutes, get out of bed, go into another room, and do a relaxing activity (such as reading) until you feel sleepy again. Then return to bed. Studies have shown that this reconditioning therapy is an effective way to treat insomnia.

Relaxation therapy is another strategy that works for some people who have insomnia. Relaxation therapy may include meditation and other mental relaxation techniques. It also may include physical relaxation techniques, such as progressively tensing and then relaxing each of the muscle groups in your body before sleep. Another method is to focus on breathing deeply. Relaxation therapy can help your body and mind slow down so that you can fall asleep more easily at bedtime.

Sleep restriction therapy also works for some people who have insomnia. Calculate your average sleep time over the course of a week, and then limit your nightly sleep time to that average (see the example below). Gradually add more sleep time each night until you achieve a more normal night’s sleep. You should avoid daytime naps longer than 15–20 minutes during sleep restriction therapy. Napping can make it harder to fall asleep at night, which may prolong insomnia. In addition, during sleep restriction therapy, avoid driving a car or operating dangerous machinery until you are getting enough sleep at night.

All of these behavioral changes are part of a treatment called cognitive behavioral therapy. Cognitive behavioral therapy also can be used to replace negative thoughts about sleep, such as “I’ll never fall asleep without sleeping pills,” with more realistic positive thinking. Cognitive behavioral therapy is effective in most people who have chronic insomnia.
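To make the sleep restriction calculation concrete, here is a brief illustration; the numbers are made up for this example and are not recommendations from this guide. Suppose your sleep diary for the past week shows you actually slept 7, 5, 6, 6, 5, 7, and 6 hours. That adds up to 42 hours over 7 nights, so your average is 42 ÷ 7 = 6 hours, and you would initially limit your nightly time in bed to about 6 hours. As your sleep becomes more solid, you would then gradually add sleep time each night, as described above, until you reach a more normal night’s sleep.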
Some people who have chronic insomnia that is not corrected by behavioral therapy or treatment of an underlying condition may need a prescription medication. You should talk to a doctor before trying to treat insomnia with alcohol, over-the-counter or prescribed short-acting sedatives, or sedating antihistamines that induce drowsiness. The benefits of these treatments are limited, and they have risks. Some may help you fall asleep but leave you feeling unrefreshed in the morning. Others have longer lasting effects and leave you feeling still tired and groggy in the morning. Some also may lose their effectiveness over time. Doctors may prescribe sedating antidepressants for insomnia, but the effectiveness of these medicines in people who do not have depression is not known, and there are significant side effects. Common Sleep Disorders 38Your Guide to Healthy Sleep To treat their insomnia, some people pursue “natural” remedies, such as melatonin supplements or valerian teas or extracts. These remedies are available over the counter. Little evidence exists that melatonin can help relieve insomnia. Studies with valerian also have been inconclusive, and the actual dose and purity of various supplements, extracts, or teas that contain valerian may vary from product to product. In addition, because melatonin, valerian, and other natural remedies are not regulated by the Food and Drug Administration, their safety is not monitored. Sleep Apnea In people who have sleep apnea (also referred to as sleep-disordered breathing), breathing briefly stops or becomes very shallow during sleep. This change is caused by intermittent blocking of the upper airway, usually when the soft tissue in the rear of the throat collapses and partially or completely closes the airway. Each pause in breathing typically lasts 10–120 seconds and may occur 20–30 times or more each sleeping hour. If you have sleep apnea, not enough air can flow into your lungs through your mouth and nose during sleep, even though breathing efforts continue. When this happens, the amount of oxygen in your blood decreases. Your brain responds by awakening you enough to tighten the upper airway muscles and open your windpipe. Normal breaths then start again, often with a loud snort or choking sound. Although people who have sleep apnea typically snore loudly and frequently, not everyone who snores has sleep apnea. (See “Is Snoring a Problem?” on page 30.) Because people who have sleep apnea frequently go from deeper sleep to lighter sleep during the night, they rarely spend enough time in deep, restorative stages of sleep. They are therefore often exces sively sleepy during the day. Such sleepiness is thought to lead to mood and behavior problems, including depression, and it more than triples the risk of being in a traffic or work-related accident. The many brief drops in blood-oxygen levels that occur during the night can result in morning headaches and trouble concentrating, thinking clearly, learning, and remembering. Additionally, the intermittent oxygen drops and reduced sleep quality together trigger the release of stress hormones. These hormones raise your blood pressure and heart rate and boost the risk of heart attack, stroke, irregular heartbeats, and congestive heart failure. In addition, 39 Common Sleep DisordersI realize now that my sleep apnea affected my quality of life. I felt tired all the time—so tired that I couldn’t exercise or spend time with my kids. 
I had other sleep apnea symptoms that affected my work—headaches, confusion, making errors, etc. “Looking back, I know that I should have taken it more seriously and told my doctor about my symptoms many years before I did. “One thing that helps me is physical activity. Now that I am feeling better, I come home from work with enough energy to have an exercise routine. J I M “ ” 40Your Guide to Healthy Sleep untreated sleep apnea can lead to changes in energy metabolism (the way your body changes food and oxygen into energy) that increase the risk for developing obesity and diabetes. Anyone can have sleep apnea. It is estimated that at least 12–18 million American adults have sleep apnea, making it as common as asthma. More than one-half of the people who have sleep apnea are overweight. Sleep apnea is more common in men. More than 1 in 25 middle-aged men and 1 in 50 middle-aged women have sleep apnea along with extreme daytime sleepiness. About 3 percent of children and 10 percent or more of people over age 65 have sleep apnea. This condition occurs more frequently in African Americans, Asians, Native Americans, and Hispanics than in Caucasians. More than one-half of all people who have sleep apnea are not diagnosed. People who have sleep apnea generally are not aware that their breathing stops in the night. They just notice that they don’t feel well rested when they wake up and are sleepy throughout the day. Their bed partners are likely to notice, however, that they snore loudly and frequently and that they often stop breathing briefly while sleeping. Doctors suspect sleep apnea if these symptoms are present, but the diagnosis must be confirmed with overnight sleep monitoring. (See “How Are Sleep Disorders Diagnosed?” on page 44.) This monitoring will reveal pauses in breathing, frequent sleep arousals (changes from sleep to wakefulness), and intermittent drops in levels of oxygen in the blood. 41 n n n n Like adults who have sleep apnea, children who have this disorder usually snore loudly, snort or gasp, and have brief pauses in breath ing while sleeping. Small children often have enlarged tonsils and adenoids that increase their risk for sleep apnea. But doctors may not suspect sleep apnea in children because, instead of showing the typical signs of sleepiness during the day, these children often become agitated and may be considered hyperactive. The effects of sleep apnea in children may include poor school performance and difficult, aggressive behavior. A number of factors can make a person susceptible to sleep apnea. These factors include: n n n n n n Throat muscles and tongue that relax more than normal while asleep Enlarged tonsils and adenoids Being overweight—the excess fat tissue around your neck makes it harder to keep the throat area open Head and neck shape that creates a somewhat smaller airway size in the mouth and throat area Congestion, due to allergies, that also can narrow the airway Family history of sleep apnea If your doctor suspects that you have sleep apnea, you may be referred to a sleep specialist. Some of the ways to help diagnose sleep apnea include: A medical history that includes asking you and your family questions about how you sleep and how you function during the day. Checking your mouth, nose, and throat for extra or large tissues—for example, checking the tonsils, uvula (the tissue that hangs from the middle of the back of the mouth), and soft palate (the roof of your mouth in the back of your throat). 
An overnight recording of what happens with your breathing during sleep (polysomnogram, or PSG). A multiple sleep latency test (MSLT), usually done in a sleep center, to see how quickly you fall asleep at times when you would normally be awake. (Falling asleep in only a few minutes usually means that you are very sleepy during the day. Being very sleepy during the day can be a sign of sleep apnea.) Common Sleep Disorders 42Your Guide to Healthy Sleep n n n Once all the tests are completed, the sleep specialist will review the results and work with you and your family to develop a treatment plan. Changes in daily activities or habits may help reduce your symptoms: Sleep on your side instead of on your back. Sleeping on your side will help reduce the amount of upper airway collapse during sleep. Avoid alcohol, smoking, sleeping pills, herbal supplements, and any other medications that make you sleepy. They make it harder for your airways to stay open while you sleep, and sedatives can make the breathing pauses longer and more severe. Tobacco smoke irritates the airways and can help trigger the intermittent collapse of the upper airway. Lose weight if you are overweight. Even a little weight loss can sometimes improve symptoms. These changes may be all that are needed to treat mild sleep apnea. However, if you have moderate or severe sleep apnea, you will need additional, more direct treatment approaches. Continuous positive airway pressure (CPAP) is the most effective treatment for sleep apnea in adults. A CPAP machine uses mild air pressure to keep your airways open while you sleep. The machine delivers air to your airways through a specially designed nasal mask. The mask does not breathe for you; the flow of air creates increased pressure to keep the airways in your nose and mouth more open while you sleep. The air pressure is adjusted so that it is just enough to stop your airways from briefly becoming too small during sleep. The pressure is constant and continuous. Sleep apnea will return if CPAP is stopped or if it is used incorrectly. People who have severe sleep apnea symptoms generally feel much better once they begin treatment with CPAP. CPAP treatment can cause side effects in some people. Possible side effects include dry or stuffy nose, irritation of the skin on the face, bloating of the stom ach, sore eyes, or headaches. If you have trouble with CPAP side effects, work with your sleep specialist and support staff. Together, you can do things to reduce or eliminate these problems. Currently, no medications cure sleep apnea. However, some prescription medications may help relieve the excessive sleepiness that sometimes persists even with CPAP treatment of sleep apnea. 43My doctor prescribed CPAP (continuous positive airway pressure) for me, but it was not easy to use at first. Sleeping with a CPAP machine was uncomfortable for me, so I didn’t use it like I should have—rarely, if at all. One day at work, I started feeling really bad, so I went to the hospital. The doctors told me that since I had not been using CPAP regularly, not enough oxygen was going to my brain, which caused symptoms like those for a stroke. So, I went back to my doctor and got a different CPAP machine that was more comfortable for me. “It’s important to talk with your health care provider to make sure that your treatment is comfortable and works for you. J I M “ ” Another treatment approach that may help some people is the use of a mouthpiece (oral or dental appliance). 
If you have mild sleep apnea or do not have sleep apnea but snore very loudly, your doctor or dentist also may recommend this. A custom-fitted plastic mouth piece will be made by a dentist or an orthodontist (a specialist in correcting teeth or jaw problems). The mouthpiece will adjust your lower jaw and tongue to help keep the airway in your throat more open while you are sleeping. Air can then flow more easily into your lungs because there is less resistance to breathing. Following up with the dentist or orthodontist is important to correct any side effects and to be sure that your mouthpiece continues to fit properly. It is also important to have a followup sleep study to see whether your sleep apnea has improved. Some people who have sleep apnea may benefit from surgery; this depends on the findings of the evaluation by the sleep specialist. Removing tonsils and adenoids that are blocking the airway is done frequently, especially in children. Uvulopalatopharyngoplasty (UPPP) is a surgery for adults that removes the tonsils, uvula, and part of the soft palate. Tracheostomy is a surgery used rarely and only in severe sleep apnea when no other treatments have been successful. A small hole is made in the windpipe, and a tube is inserted. Air will flow through the tube and into the lungs, bypass ing the obstruction in the upper airway. Common Sleep Disorders 44Your Guide to Healthy Sleep l l l How Are Sleep disorders diagnosed? Depending on your symptoms, your doctor will gather informa tion and consider several possible tests when trying to diagnose a sleep disorder: Sleep history and sleep log. Your doctor will ask you how many hours you sleep each night, how often you awaken during the night and for how long, how long it takes you to fall asleep, how well rested you feel upon awakening, and how sleepy you feel during the day. Your doctor may ask you to keep a sleep diary for a few weeks. (See “Sample Sleep Diary” on page 54.) Your doctor also may ask you whether you have any symptoms of sleep apnea or restless legs syndrome, such as loud snoring, snorting or gasping, morning headaches, tingling or unpleasant sensations in the limbs that are relieved by moving them, and jerking of the limbs during sleep. Your sleeping partner may be asked whether you have some of these symptoms, as you may not be aware of them yourself. Sleep recording in a sleep laboratory (polysomnogram). A sleep recording or polysomnogram (PSG) is usually done while you stay overnight at a sleep center or sleep laboratory. Electrodes and other monitors are placed on your scalp, face, chest, limbs, and finger. While you sleep, these devices measure your brain activity, eye movements, muscle activity, heart rate and rhythm, blood pressure, and how much air moves in and out of your lungs. This test also checks the amount of oxygen in your blood. A PSG test is painless. In certain circumstances, the PSG can be done at home. A home monitor can be used to record heart rate, how air moves in and out of your lungs, the amount of oxygen in your blood, and your breathing effort. Multiple sleep latency test (MSLT). This daytime sleep study measures how sleepy you are and is particularly useful for diagnosing narcolepsy. The MSLT is conducted in a sleep 45 laboratory and typically done after an overnight sleep recording (PSG). In this test, monitoring devices for sleep stage are placed on your scalp and face. You are asked to nap four or five times for 20 minutes every 2 hours during the day. 
Technicians note how quickly you fall asleep and how long it takes you to reach various stages of sleep, especially REM sleep, during your naps. Normal individuals either do not fall asleep during these short designated naptimes or take a long time to fall asleep. People who fall asleep in less than 5 minutes are likely to require treatment for a sleep disorder, as are those who quickly reach REM sleep during their naps. It is important to have a sleep specialist interpret the results of your PSG or MSLT. See “How To Find a Sleep Center and Sleep Specialist” on page 56.

“I started to get weird feelings in my legs at night while I slept. To feel better, I would get up and move around and stretch. Then the weird feelings began to happen more often and made me lose sleep. I started to think that something was wrong. I decided to go to the doctor and was diagnosed with restless legs syndrome (RLS). Because RLS symptoms can change, I’m always trying to find the right mix of diet, medication, and exercise. Exercise and massage help me manage my RLS. Yoga helps a lot too, because of all the stretching involved.” (Lauren)

Restless Legs Syndrome

Restless legs syndrome (RLS) causes an unpleasant prickling or tingling in the legs, especially in the calves, that is relieved by moving or massaging them. People who have RLS feel a need to stretch or move their legs to get rid of the uncomfortable or painful feelings. As a result, it may be difficult to fall asleep and stay asleep. One or both legs may be affected. Some people also feel the sensations in their arms. These sensations also can occur when lying down or sitting for long periods of time, such as while at a desk, riding in a car, or watching a movie.

Many people who have RLS also have brief limb movements during sleep, often with abrupt onset, occurring every 5–90 seconds. This condition, known as periodic limb movements in sleep (PLMS), can repeatedly awaken people who have RLS, reducing their total sleep time and interrupting their sleep. Some people have PLMS but have no abnormal sensations in their legs while awake.

RLS affects 5–15 percent of Americans, and its prevalence increases with age. RLS occurs more often in women than men. One study found that RLS accounted for one-third of the insomnia seen in patients older than age 60. Children also can have RLS. In children, the condition may be associated with symptoms of attention-deficit hyperactivity disorder. However, it’s not fully known how the disorders are related. Sometimes “growing pains” can be mistaken for RLS.

RLS is often inherited. Pregnancy, kidney failure, and anemia related to iron or vitamin deficiency can trigger or worsen RLS symptoms. Researchers suspect that these conditions cause an iron deficiency that results in a lack of dopamine, which is used by the brain to control physical sensation and limb movements.

Doctors usually can diagnose RLS by patients’ symptoms and a telltale worsening of symptoms at night or while at rest. Some doctors may order a blood test to check ferritin levels (ferritin is a protein that stores iron). Doctors also may ask people who have RLS to spend a night in a sleep laboratory, where they are monitored to rule out other sleep disorders and to document the excessive limb movements.

RLS is treatable but not always curable. Dramatic improvements are seen quickly when patients are given dopamine-like drugs or iron supplements.
Alternatively, people who have milder cases may be treated successfully with sedatives or behavioral strategies. These Common Sleep Disorders 48Your Guide to Healthy Sleep n strategies include stretching, taking a hot bath, or massaging the legs before bedtime. Avoiding caffeinated beverages also can help reduce symptoms, and certain medications (e.g., some antidepressants, particularly selective serotonin reuptake inhibitors) may cause RLS. If iron or vitamin deficiency underlies RLS, symptoms may improve with prescribed iron, vitamin B12, or folate supplements. Some people may require anticonvulsant medications to control the creeping and crawling sensations in their limbs. Others who have severe symptoms that are associated with another medical disorder or that do not respond to normal treatments may need to be treated with pain relievers. Narcolepsy Narcolepsy’s main symptom is extreme and overwhelming daytime sleepiness, even after adequate nighttime sleep. In addition, nighttime sleep may be fragmented by frequent awakenings. People who have narcolepsy often fall asleep at inappropriate times and places. Although TV sitcoms occasionally feature these individuals to generate a few laughs, narcolepsy is no laughing matter. People who have narcolepsy experience daytime “sleep attacks” that last from seconds to more than one-half hour, can occur without warning, and may cause injury. These embarrassing sleep spells also can make it difficult to work and to maintain normal personal or social relationships. With narcolepsy, the usually sharp distinctions between being asleep and awake are blurred. Also, people who have narcolepsy tend to fall directly into dream-filled REM sleep, rather than enter REM sleep gradually after passing through the non-REM sleep stages first. In addition to overwhelming daytime sleepiness, narcolepsy has three other commonly associated symptoms, but these may not occur in all people: Sudden muscle weakness (cataplexy). This weakness is similar to the paralysis that normally occurs during REM sleep, but it lasts a few seconds to minutes while an individual is awake. Cataplexy tends to be triggered by sudden emotional reac tions, such as anger, surprise, fear, or laughter. The weakness may show up as limpness at the neck, buckling of the knees, or sagging facial muscles affecting speech, or it may cause a complete body collapse. 49 Common Sleep DisordersAt first, I was misdiagnosed with chronic fatigue syndrome, because I was in my forties and narcolepsy symptoms usually start during the teen years. Because I didn’t have any of the symptoms of chronic fatigue syndrome other than sleepiness, I went to a neurologist for help. He noticed the cataplexy (muscle weakness) right away, and then I was officially diagnosed with narcolepsy and then later on with borderline sleep apnea. “Even though there is no cure for narcolepsy, you can feel like you have control if you manage it well. “When you have narcolepsy, you live your life differently. But with a good plan and supportive friends and family, it all turns out OK. S Z E - P I N G “ ” 50Your Guide to Healthy Sleep n n Sleep paralysis. People who have narcolepsy may experience a temporary inability to talk or move when falling asleep or waking up, as if they were glued to their beds. Vivid dreams. These dreams can occur when people who have narcolepsy first fall asleep or wake up. The dreams are so lifelike that they can be confused with reality. 
Experts estimate that as many as 350,000 Americans have narco lepsy, but fewer than 50,000 are diagnosed. The disorder may be as widespread as Parkinson’s disease or multiple sclerosis, and more prevalent than cystic fibrosis, but it is less well known. Narcolepsy is often mistaken for depression, epilepsy, or the side effects of medicines. Narcolepsy can be difficult to diagnose in people who have only the symptom of excessive daytime sleepiness. It is usually diagnosed during an overnight sleep recording (PSG) that is followed by an MSLT. (See “How Are Sleep Disorders Diagnosed?” on page 44.) Both tests reveal symptoms of narcolepsy—the tendency to fall asleep rapidly and enter REM sleep early, even during brief naps. Narcolepsy can develop at any age, but the symptoms tend to appear first during adolescence or early adulthood. About 1 of every 10 people who have narcolepsy has a close family member who has the disorder, suggesting that one can inherit a tendency to develop narcolepsy. Studies suggest that a substance in the brain called hypocretin plays a key role in narcolepsy. Most people who have narcolepsy lack hypocretin, which promotes wakefulness. Scientists believe that an autoimmune reaction—perhaps triggered by disease, viral illness, or brain injury— specifically destroys the hypocretin-generating cells in the brains of people who have narcolepsy. 51 Eventually, researchers may develop a treatment for narcolepsy that restores hypocretin to normal levels. In the meantime, most people who have narcolepsy find some to all of their symptoms relieved by various drug treatments. For example, central nervous system stimulants can reduce daytime sleepiness. Antidepressants and other drugs that suppress REM sleep can prevent muscle weakness, sleep paralysis, and vivid dreaming. Doctors also usually recommend that people who have narcolepsy take short naps (10–15 minutes) two or three times a day, if possible, to help control excessive daytime sleepiness. Parasomnias (Abnormal Arousals) In some people, the walking, talking, and other body functions normally suppressed during sleep occur during certain sleep stages. Alternatively, the paralysis or vivid images usually experienced during dreaming may persist after awakening. These occurrences are collectively known as parasomnias and include confusional arousals (a mixed state of being both asleep and awake), sleep talking, sleep walking, night terrors, sleep paralysis, and REM sleep behavior disorder (acting out dreams). Most of these disorders— such as confusional arousals, sleep walking, and night terrors—are more common in children, who tend to outgrow them once they become adults. People who are sleep-deprived also may experience some of these disorders, including sleep walking and sleep paralysis. Sleep paralysis also commonly occurs in people who have narco lepsy. Certain medications or neurological disorders appear to lead to other parasomnias, such as REM sleep behavior disorder, and these parasomnias tend to occur more in elderly people. If you or a family member has persistent episodes of sleep paralysis, sleep walking, or acting out of dreams, talk with your doctor. Taking measures to assure the safety of children and other family members who have partial arousals from sleep is very important. Common Sleep Disorders 52Your Guide to Healthy SleepIt’s a scary experience, lying in bed, wanting to get up, but unable to—scary enough to almost make you not want to go to sleep anymore. 
I can remember, as a child, feeling as though there was a weight on me when I was trying to wake up, and I couldn’t move. When I would try to wake up, I would kick my legs and flail my arms, sometimes bumping my wife. I really didn’t have control over my limbs. When the symptoms got really bad, I went to a sleep specialist, who told me I had sleep paralysis. My doctor prescribed a medicine that has worked great for me. Now, I rarely have sleep paralysis—maybe 3 times per year. (Lawrence)

Do You Think You Have a Sleep Disorder?

At various points in our lives, all of us suffer from a lack of sleep that can be corrected by making sure we have the opportunity to get enough sleep. But, if you are spending enough time in bed and still wake up tired or feel very sleepy during the day, you may have a sleep disorder. See “Common Signs of a Sleep Disorder” on page 34.

One of the best ways you can tell whether you are getting enough good-quality sleep, and whether you have signs of a sleep disorder, is by keeping a sleep diary. (See “Sample Sleep Diary” on page 54.) Use this diary to record the quality and quantity of your sleep; your use of medications, alcohol, and caffeinated beverages; your exercise patterns; and how sleepy you feel during the day. After a week or so, look over this information to see how many hours of sleep or nighttime awakenings one night are linked to your being tired the next day. This information will give you a sense of how much uninterrupted sleep you need to avoid daytime sleepiness. You also can use the diary to see some of the patterns or practices that may keep you from getting a good night’s sleep.

You may have a sleep disorder and should see your doctor if your sleep diary reveals any of the following:

- You consistently take more than 30 minutes each night to fall asleep.
- You consistently awaken more than a few times or for long periods of time each night.
- You take frequent naps.
- You often feel sleepy during the day—or you fall asleep at inappropriate times during the day.

Sample Sleep Diary (the entries shown are example diary entries for one day—use them as a model for your own diary notes)

Complete in the morning:
- Today’s date (include month/day/year): Monday
- Time I went to bed last night: 11 p.m.
- Time I woke up this morning: 7 a.m.
- No. of hours slept last night: 8
- Number of awakenings and total time awake last night: 5 times, 2 hours
- How long I took to fall asleep last night: 30 mins.
- Medications taken last night: None
- How awake did I feel when I got up this morning? (1—Wide awake; 2—Awake but a little tired; 3—Sleepy): 2

Complete in the evening:
- Number of caffeinated drinks (coffee, tea, cola) and time when I had them today: 1 drink at 8 p.m.
- Number of alcoholic drinks (beer, wine, liquor) and time when I had them today: 2 drinks at 9 p.m.
- Naptimes and lengths today: 3:30 p.m., 45 mins.
- Exercise times and lengths today: None
- How sleepy did I feel during the day today? (1—So sleepy I had to struggle to stay awake during much of the day; 2—Somewhat tired; 3—Fairly alert; 4—Wide awake): 1

How To Find a Sleep Center and Sleep Specialist

If your doctor refers you to a sleep center or sleep specialist, make sure that center or specialist is qualified to diagnose and treat your sleep problem.
To find sleep centers accredited by the American Academy of Sleep Medicine, go to www.aasmnet.org and click on “Find a Sleep Center” (under the Patients & Public menu), or call 708–492–0930. To find sleep specialists certified by the American Board of Sleep Medicine, go to www.absm.org and click on “Verification of Diplomates of the ABSM.”

Research

Researchers have learned a lot about sleep and sleep disorders in recent years. That knowledge has led to a better understanding of the importance of sleep to our lives and our health. Research supported by the National Heart, Lung, and Blood Institute (NHLBI) has helped identify some of the causes of sleep disorders and their effects on the heart, brain, lungs, and other body systems. The NHLBI also supports ongoing research on the most effective ways to diagnose and treat sleep disorders. Many questions remain about sleep and sleep disorders. The NHLBI continues to support a range of research that focuses on:

- Better understanding of how a lack of sleep increases the risk for obesity, diabetes, heart disease, and stroke
- New ways to diagnose sleep disorders
- Genetic, environmental, and social factors that lead to sleep disorders
- The adverse effects from a lack of sleep on body and brain

Much of this research depends on the willingness of volunteers to participate in clinical research. If you would like to help researchers advance science on sleep or about a sleep disorder you have and possible treatments, talk to your doctor about participating in clinical research. (For more information, see “Clinical Research” on page 58.)

Clinical Research

Researchers can learn quite a bit about sleep and sleep disorders by studying animals. However, to fully understand sleep and its effect on health and functioning, as well as how best to diagnose and treat sleep disorders, researchers need to do clinical research on people. This type of research is called clinical research because it is often conducted in clinical settings, such as hospitals or doctors’ offices. The two types of clinical research are clinical trials and clinical studies.

- Clinical trials test new ways to diagnose, prevent, or treat various disorders. For example, treatments (such as medicines, medical devices, surgery, or other procedures) for a disorder need to be tested in people who have the disorder. A trial helps determine whether a treatment is safe and effective in humans before it is made available for public use. In a clinical trial, participants are randomly assigned to groups. One group receives the new treatment being tested. Other groups may receive a different treatment or a placebo (an inactive substance resembling a drug being tested). Comparing results from the groups gives researchers confidence that changes in the test group are due to the new treatment and not to other factors.
- Other types of clinical studies are done to discover the factors, including environmental, behavioral, or genetic factors, that cause or worsen various disorders. Researchers may follow a group of people over time to learn what factors contribute to becoming sick.

Clinical studies and trials may be relatively brief, or may last for years and require many visits to the study sites. These sites usually are university hospitals or research centers, but they can include private doctors’ offices and community hospitals.
If you participate in clinical research, the research will be explained to you in detail, you will be given a chance to ask questions, and you will be asked to provide written permission. You may not directly benefit from the results of the clinical research you participate in, but the information gathered will help others and will add to scientific knowledge. Taking part in clinical research has other benefits, as well. You’ll learn more about your disorder, you’ll have the support of a team of health care providers, and your health will likely be monitored closely. However, participation also can have risks, which you should discuss with your doctor. No matter what you decide, your regular medical care will not be affected. If you’re thinking about participating in a clinical study, you may have questions about the purpose of the study, the types of tests and treatment involved, how participation will affect your daily life, and whether any costs are involved. Your doctor may be able to answer some of your questions and help you find clinical studies in which you can participate. You also can visit the following Web sites to learn about being in a study and to search for clinical trials being done on your disorder: www.clinicaltrials.gov http://clinicalresearch.nih.gov www.nhlbi.nih.gov/studies/index.htm Clinical Research Research 60Your Guide to Healthy Sleep For More Sleep Information Resources From the National Heart, Lung, and Blood Institute (NHLBI) National Center on Sleep Disorders Research Division of Lung Diseases, NHLBI Two Rockledge Centre, Suite 10170 6701 Rockledge Drive Bethesda, MD 20895–7952 Phone: 301–435–0199 Fax: 301–480–3451 Web site: www.nhlbi.nih.gov/sleep NHLBI Diseases and Conditions Index (DCI) The DCI includes articles on sleep disorders, tests, and procedures, along with videos, podcasts, and Spanish-language articles. Web site: www.nhlbi.nih.gov/health/dci/index.html NHLBI Health Information Center P.O. Box 30105 Bethesda, MD 20824–0105 Telephone: 301–592–8573 TTY: 240–629–3255 Fax: 301–592–8563 E-mail: [email protected] Web site: www.nhlbi.nih.gov NIH Office of Science Education Web site (for high school supplemental curriculum: Sleep, Sleep Disorders, and Biological Rhythms) http://science.education.nih.gov 61 Resources From Other Sleep Organizations American Academy of Sleep Medicine (AASM) 2510 North Frontage Road Darien, IL 60561 Telephone: 630–737–9700 Fax: 630–737–9790 Web site: www.aasmnet.org American Sleep Apnea Association 6856 Eastern Avenue, NW., Suite 203 Washington, DC 20012 Telephone: 202–203–3650 Fax: 202–293–3656 Web site: www.sleepapnea.org Narcolepsy Network P.O. 
Box 294, Pleasantville, NY 10570
Telephone: 401–667–2523
Fax: 401–633–6567
E-mail: [email protected]
Web site: www.narcolepsynetwork.org

National Sleep Foundation
1010 North Glebe Road, Suite 310
Arlington, VA 22201
Telephone: 703–243–1697
E-mail: [email protected]
Web site: www.sleepfoundation.org

Restless Legs Syndrome Foundation
1610 14th Street, NW., Suite 300
Rochester, MN 55901
Telephone: 507–287–6465
Fax: 507–287–6312
E-mail: [email protected]
Web site: www.rls.org

Discrimination Prohibited: Under provisions of applicable public laws enacted by Congress since 1964, no person in the United States shall, on the grounds of race, color, national origin, handicap, or age, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity (or, on the basis of sex, with respect to any education program or activity) receiving Federal financial assistance.
In addition, Executive Order 11141 prohibits discrimination on the basis of age by contractors and subcontractors in the performance of Federal contracts, and Executive Order 11246 states that no federally funded contractor may discriminate against any employee or applicant for employment because of race, color, religion, sex, or national origin. Therefore, the National Heart, Lung, and Blood Institute must be operated in compliance with these laws and Executive Orders.

NIH Publication No. 11-5271
Originally printed November 2005
Revised August 2011
Only respond with the most direct answer possible. Do not discuss anything else. Use only information from the provided document. EVIDENCE: Y O U R G U I D E T O Healthy Sleep Y o u r G u i d e t o Healthy Sleep NIH Publication No. 11-5271 Originally printed November 2005 Revised August 2011 Contents Introduction 1 What Is Sleep? 4 What Makes You Sleep? 7 What Does Sleep Do for You? 12 Your Learning, Memory, and Mood 12 Your Heart 13 Your Hormones 14 How Much Sleep Is Enough? 19 What Disrupts Sleep? 25 Is Snoring a Problem? 30 Common Sleep Disorders 33 Insomnia 35 Sleep Apnea 38 Restless Legs Syndrome 47 Narcolepsy 48 Parasomnias (Abnormal Arousals) 51 Do You Think You Have a Sleep Disorder? 53 How To Find a Sleep Center and Sleep Specialist 56 Research 57 For More Sleep Information 60 Contents 1 Introduction Think of your daily activities. Which activity is so important you should devote one-third of your time to doing it? Probably the first things that come to mind are working, spending time with your family, or doing leisure activities. But there’s something else you should be doing about one-third of your time—sleeping. Many people view sleep as merely a “down time” when their brains shut off and their bodies rest. People may cut back on sleep, think ing it won’t be a problem, because other responsibilities seem much more important. But research shows that a number of vital tasks carried out during sleep help people stay healthy and function at their best. While you sleep, your brain is hard at work forming the pathways necessary for learning and creating memories and new insights. Without enough sleep, you can’t focus and pay attention or respond quickly. A lack of sleep may even cause mood problems. Also, growing evidence shows that a chronic lack of sleep increases your risk of obesity, diabetes, cardiovas cular disease, and infections. Introduction 2Your Guide to Healthy Sleep Despite growing support for the idea that adequate sleep, like adequate nutrition and physical activity, is vital to our well-being, people are sleeping less. The nonstop “24/7” nature of the world today encourages longer or nighttime work hours and offers continual access to entertainment and other activities. To keep up, people cut back on sleep. A common myth is that people can learn to get by on little sleep (such as less than 6 hours a night) with no adverse effects. Research suggests, however, that adults need at least 7–8 hours of sleep each night to be well rested. Indeed, in 1910, most people slept 9 hours a night. But recent surveys show the average adult now sleeps fewer than 7 hours a night. More than one-third of adults report daytime sleepiness so severe that it interferes with work, driving, and social functioning at least a few days each month. Evidence also shows that children’s and adolescents’ sleep is shorter than recommended. These trends have been linked to increased exposure to electronic media. Lack of sleep may have a direct effect on children’s health, behavior, and development. Chronic sleep loss or sleep disorders may affect as many as 70 million Americans. This may result in an annual cost of $16 billion in health care expenses and $50 billion in lost productivity. 3 What happens when you don’t get enough sleep? Can you make up for lost sleep during the week by sleeping more on the weekends? How does sleep change as you become older? Is snoring a problem? How can you tell if you have a sleep disorder? 
Read on to find the answers to these questions and to better understand what sleep is and why it is so necessary. Learn about common sleep myths and practical tips for getting enough sleep, coping with jet lag and nighttime shift work, and avoiding dangerous drowsy driving. Many common sleep disorders go unrecognized and thus are not Introduction treated. This booklet also gives the latest information on sleep disorders such as insomnia (trouble falling or staying asleep), sleep apnea (pauses in breathing during sleep), restless legs syndrome, narcolepsy (extreme daytime sleepiness), and parasomnias (abnormal sleep behaviors).It’s important to tell your doctor what you are experiencing, so you can help your doctor diagnose your condition. S Z E - P I N G “ ” 4Your Guide to Healthy Sleep What Is Sleep? Sleep was long considered just a block of time when your brain and body shut down. Thanks to sleep research studies done over the past several decades, it is now known that sleep has distinct stages that cycle throughout the night in predictable patterns. How well rested you are and how well you function depend not just on your total sleep time but on how much sleep you get each night and the timing of your sleep stages. Your brain and body functions stay active through out sleep, and each stage of sleep is linked to a specific type of brain waves (distinctive patterns of electrical activity in the brain). Sleep is divided into two basic types: rapid eye movement (REM) sleep and non-REM sleep (with three different stages). (For more information, see “Types of Sleep” on page 5.) Typically, sleep begins with non-REM sleep. In stage 1 non-REM sleep, you sleep lightly and can be awakened easily by noises or other disturbances. During this first stage of sleep, your eyes move slowly, your muscles relax, and your heart and breath ing rates begin to slow. You then enter stage 2 non-REM sleep, which is defined by slower brain waves with occasional bursts of rapid waves. You spend about half the night in this stage. When you progress into stage 3 non- REM sleep, your brain waves become even slower, and the brain produces extremely slow waves almost exclusively (called Delta waves). 5 l l l l l Stage 3 is a very deep stage of sleep, during which it is very difficult to be awakened. Children who wet the bed or sleep walk tend to do so during stage 3 of non-REM sleep. Deep sleep is considered the “restorative” stage of sleep that is necessary for feeling well rested and energetic during the day. Types of Sleep Non-REM Sleep REM Sleep Stage 1: Light sleep; easily awakened; muscles relax with occasional twitches; eye movements are slow. Stage 2: Eye movements stop; slower brain waves, with occasional bursts of rapid brain waves. Stage 3: Occurs soon after you fall asleep and mostly in the first half of the night. Deep sleep; difficult to awaken; large slow brain waves, heart and respiratory rates are slow and muscles are relaxed. Usually first occurs about 90 minutes after you fall asleep, and longer, deeper periods occur during the second half of the night; cycles along with the non-REM stages throughout the night. Eyes move rapidly behind closed eyelids. Breathing, heart rate, and blood pressure are irregular. Dreaming occurs. Arm and leg muscles are temporarily paralyzed. Types of Sleep During REM sleep, your eyes move rapidly in different directions, even though your eyelids stay closed. Your breathing also becomes more rapid, irregular, and shallow, and your heart rate and blood pressure increase. 
Dreaming typically occurs during REM sleep. During this type of sleep, your arm and leg muscles are temporarily paralyzed so that you cannot "act out" any dreams that you may be having. You typically first enter REM sleep about an hour to an hour and a half after falling asleep. After that, the sleep stages repeat themselves continuously while you sleep. As you sleep, REM sleep time becomes longer, while time spent in stage 3 non-REM sleep becomes shorter. By the time you wake up, nearly all your sleep time has been spent in stages 1 and 2 of non-REM sleep and in REM sleep. If REM sleep is severely disrupted during one night, REM sleep time is typically longer than normal in subsequent nights until you catch up. Overall, almost one-half of your total sleep time is spent in stage 2 non-REM sleep and about one-fifth each in deep sleep (stage 3 of non-REM sleep) and REM sleep. In contrast, infants spend half or more of their total sleep time in REM sleep. Gradually, as they grow, the percentage of total sleep time they spend in REM continues to decrease, until it reaches the one-fifth level typical of later childhood and adulthood.

Why people dream and why REM sleep is so important are not well understood. It is known that REM sleep stimulates the brain regions you use to learn and make memories. Animal studies suggest that dreams may reflect the brain's sorting and selectively storing new information acquired during wake time. While this information is processed, the brain might revisit scenes from the day and mix them randomly. Dreams are generally recalled when we wake briefly or are awakened by an alarm clock or some other noise in the environment. Studies show, however, that other stages of sleep besides REM also are needed to form the pathways in the brain that enable us to learn and remember.

What Makes You Sleep?

Although you may put off going to sleep in order to squeeze more activities into your day, eventually your need for sleep becomes overwhelming. This need appears to be due, in part, to two substances your body produces. One substance, called adenosine, builds up in your blood while you're awake. Then, while you sleep, your body breaks down the adenosine. Levels of this substance in your body may help trigger sleep when needed. A buildup of adenosine and many other complex factors might explain why, after several nights of less than optimal amounts of sleep, you build up a sleep debt. This may cause you to sleep longer than normal or at unplanned times during the day. Because of your body's internal processes, you can't adapt to getting less sleep than your body needs. Eventually, a lack of sleep catches up with you.

The other substance that helps make you sleep is a hormone called melatonin. This hormone makes you naturally feel sleepy at night. It is part of your internal "biological clock," which controls when you feel sleepy and your sleep patterns. Your biological clock is a small bundle of cells in your brain that works throughout the day and night. Internal and external environmental cues, such as light signals received through your eyes, control these cells. Your biological clock triggers your body to produce melatonin, which helps prepare your brain and body for sleep. As melatonin is released, you'll feel increasingly drowsy. Because of your biological clock, you naturally feel the most tired between midnight and 7 a.m. You also may feel mildly sleepy in the afternoon between 1 p.m. and 4 p.m., when another increase in melatonin occurs in your body.

Your biological clock makes you the most alert during daylight hours and the least alert during the early morning hours. Consequently, most people do their best work during the day. Our 24/7 society, however, demands that some people work at night. Nearly one-quarter of all workers work shifts that are not during the daytime, and more than two-thirds of these workers have problem sleepiness and/or difficulty sleeping. Because their work schedules are at odds with powerful sleep-regulating cues like sunlight, night shift workers often find themselves drowsy at work, and they have difficulty falling or staying asleep during the daylight hours when their work schedules require them to sleep.

The fatigue experienced by night shift workers can be dangerous. Major industrial accidents—such as the Three Mile Island and Chernobyl nuclear power plant accidents and the Exxon Valdez oil spill—have been caused, in part, by mistakes made by overly tired workers on the night shift or an extended shift. Night shift workers also are at greater risk of being in car crashes when they drive home from work during the early morning hours, because the biological clock is not sending out an alerting signal. One study found that one-fifth of night shift workers had a car crash or a near miss in the preceding year because of sleepiness on the drive home from work. Night shift workers are also more likely to have physical problems, such as heart disease, digestive troubles, and infertility, as well as emotional problems. All of these problems may be related, at least in part, to the workers' chronic sleepiness, possibly because their biological clocks are not in tune with their work schedules. See "Working the Night Shift" on page 9 for some helpful tips if you work a night shift.

Other factors also can influence your need for sleep, including your immune system's production of hormones called cytokines. Cytokines are made to help the immune system fight certain infections or chronic inflammation and may prompt you to sleep more than usual. The extra sleep may help you conserve the resources needed to fight the infection. Recent studies confirm that being well rested improves the body's responses to infection.

People are creatures of habit, and one of the hardest habits to break is the natural wake and sleep cycle. Together, a number of physiological factors help you sleep and wake up at the same times each day. Consequently, you may have a hard time adjusting when you travel across time zones. The light cues outside and the clocks in your new location may tell you it is 8 a.m. and you should be active, but your body is telling you it is more like 4 a.m. and you should sleep. The end result is jet lag—sleepiness during the day, difficulty falling or staying asleep at night, poor concentration, confusion, nausea, and generally feeling unwell and irritable. See "Dealing With Jet Lag" on page 10.

Working the Night Shift

Try to limit night shift work, if that is possible. If you must work the night shift, the following tips may help you:

- Increase your total amount of sleep by adding naps and lengthening the amount of time you allot for sleep.
- Use bright lights in your workplace.
- Minimize the number of shift changes so that your body's biological clock has a longer time to adjust to a nighttime work schedule.
- Get rid of sound and light distractions in your bedroom during your daytime sleep.
- Use caffeine only during the first part of your shift to promote alertness at night.
- If you are unable to fall asleep during the day, and all else fails, talk with your doctor to see whether it would be wise for you to use prescribed, short-acting sleeping pills to help you sleep during the day.

Dealing With Jet Lag

Be aware that adjusting to a new time zone may take several days. If you are going to be away for just a few days, it may be better to stick to your original sleep and wake times as much as possible, rather than adjusting your biological clock too many times in rapid succession. Eastward travel generally causes more severe jet lag than westward travel because traveling east requires you to shorten the day, and your biological clock is better able to adjust to a longer day than a shorter day. Fortunately for globetrotters, a few preventive measures and adjustments seem to help some people relieve jet lag, particularly when they are going to spend more than a few days at their destination:

- Adjust your biological clock. During the 2–3 days prior to a long trip, get adequate sleep. You can make minor changes to your sleep schedule (a simple example schedule is sketched just after these tips). For example, if you are traveling west, delay your bedtime and wake time progressively by 20- to 30-minute intervals. If you are traveling east, advance your wake time by 10 to 15 minutes a day for a few days and try to advance your bedtime. Decreasing light exposure at bedtime and increasing light exposure at wake time can help you make these adjustments. When you arrive at your destination, spend a lot of time outdoors so your body gets the light cues it needs to adjust to the new time zone. Take a couple of short 10–15 minute catnaps if you feel tired, but do not take long naps during the day.
- Avoid alcohol and caffeine. Although it may be tempting to drink alcohol to relieve the stress of travel and make it easier to fall asleep, you're more likely to sleep lighter and wake up in the middle of the night when the effects of the alcohol wear off. Caffeine can help keep you awake longer, but caffeine also can make it harder for you to fall asleep if its effects haven't worn off by the time you are ready to go to bed. Therefore, it's best to use caffeine only during the morning and not during the afternoon.
- What about melatonin? Your body produces this hormone that may cause some drowsiness and cues the brain and body that it is time to fall asleep. Melatonin builds up in your body during the early evening and into the first 2 hours of your sleep period, and then its release stops in the middle of the night. Melatonin is available as an over-the-counter supplement. Because melatonin is considered safe when used over a period of days or weeks and seems to help people feel sleepy, it has been suggested as a treatment for jet lag. But melatonin's effectiveness is controversial, and its safety when used over a prolonged period is unclear. Some studies find that taking melatonin supplements before bedtime for several days after arrival in a new time zone can make it easier to fall asleep at the proper time. Other studies find that melatonin does not help relieve jet lag.
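To make the gradual-shift arithmetic in "Dealing With Jet Lag" concrete, here is a minimal sketch in Python. The daily shift sizes come straight from the tips above (20- to 30-minute delays for westward travel, 10- to 15-minute advances for eastward travel); the 3-day default, the exact minute values chosen, the function name, and the example times are illustrative assumptions rather than recommendations from the guide.

```python
# Illustrative sketch only. The shift sizes mirror the jet lag tips above;
# the schedule length, function name, and example times are assumptions.
from datetime import datetime, timedelta

def pretrip_schedule(bedtime_str, wake_str, direction, days=3):
    """Suggest gradually shifted bed and wake times for the days before a trip."""
    bedtime = datetime.strptime(bedtime_str, "%H:%M")
    wake = datetime.strptime(wake_str, "%H:%M")
    # Delay both times when flying west; advance them when flying east.
    step = timedelta(minutes=25) if direction == "west" else -timedelta(minutes=15)
    plan = []
    for day in range(1, days + 1):
        bedtime += step
        wake += step
        plan.append((day, bedtime.strftime("%H:%M"), wake.strftime("%H:%M")))
    return plan

# Hypothetical traveler: 11 p.m. bedtime, 7 a.m. wake time, flying west.
for day, bed, wake in pretrip_schedule("23:00", "07:00", "west"):
    print(f"Day {day} before travel: bedtime {bed}, wake time {wake}")
```

Run as written, the example simply moves an 11 p.m. bedtime and 7 a.m. wake time about 25 minutes later on each of the 3 days before a westward trip; whether such a schedule suits you is an individual matter, as the tips above note.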
What Does Sleep Do for You?

A number of aspects of your health and quality of life are linked to sleep, and these aspects are impaired when you are sleep deprived.

Your Learning, Memory, and Mood

Students who have trouble grasping new information or learning new skills are often advised to "sleep on it," and that advice seems well founded. Recent studies reveal that people can learn a task better if they are well rested. They also can better remember what they learned if they get a good night's sleep after learning the task than if they are sleep deprived. Study volunteers had to sleep at least 6 hours to show improvement in learning. Additionally, the amount of improvement was directly related to how much time they slept—for example, volunteers who slept 8 hours outperformed those who slept only 6 or 7 hours. Other studies suggest that it's important to get enough rest the night before a mentally challenging task, rather than only sleeping for a short period or waiting to sleep until after the task is complete.

Many well-known artists and scientists claim to have had creative insights while they slept. Mary Shelley, for example, said the idea for her novel Frankenstein came to her in a dream. Although it has not been shown that dreaming is the driving force behind innovation, one study suggests that sleep is needed for creative problem-solving. In that study, volunteers were asked to perform a memory task and then were tested on it 8 hours later. Those who were allowed to sleep for 8 hours immediately after trying the task and before being tested were much more likely to find a creative way of simplifying the task and improving their performance, compared with those who were awake the entire 8 hours before being tested.

Exactly what happens during sleep to improve our learning, memory, and insight isn't known. Experts suspect, however, that while people sleep, they form or strengthen the pathways of brain cells needed to perform these tasks. This process may explain why sleep is needed for proper brain development in infants.

Not only is a good night's sleep required to form new learning and memory pathways in the brain, but also sleep is necessary for those pathways to work well. Several studies show that lack of sleep causes thinking processes to slow down, makes it harder to focus and pay attention, and can leave you more easily confused. Studies also find that a lack of sleep leads to faulty decisionmaking and more risk taking. A lack of sleep slows down your reaction time, which is particularly important to driving and other tasks that require quick response. When people who lack sleep are tested on a driving simulator, they perform just as poorly as people who are drunk. (See "Crash in Bed, Not on the Road" on page 16.) The bottom line is: Not getting a good night's sleep can be dangerous!

Even if you don't have a mentally or physically challenging day ahead of you, you should still get enough sleep to put yourself in a good mood. Most people report being irritable, if not downright unhappy, when they lack sleep. People who chronically suffer from a lack of sleep, either because they do not spend enough time in bed or because they have an untreated sleep disorder, are at greater risk of developing depression. One group of people who usually don't get enough sleep is mothers of newborns. Some experts think depression after childbirth (postpartum blues) is caused, in part, by a lack of sleep.

Your Heart

Sleep gives your heart and vascular system a much-needed rest. During non-REM sleep, your heart rate and blood pressure progressively slow as you enter deeper sleep. During REM sleep, in response to dreams, your heart and breathing rates can rise and fall and your blood pressure can be variable. These changes throughout the night in blood pressure and heart and breathing rates seem to promote cardiovascular health.

If you don't get enough sleep, the nightly dip in blood pressure that appears to be important for good cardiovascular health may not occur. Failure to experience the normal dip in blood pressure during sleep can be related to insufficient sleep time, an untreated sleep disorder (for example, sleep apnea), or other factors. Some sleep-related abnormalities may be markers of heart disease and increased risk of stroke.

A lack of sleep also puts your body under stress and may trigger the release of more adrenaline, cortisol, and other stress hormones during the day. These hormones keep your blood pressure from dipping during sleep, which increases your risk for heart disease. Lack of sleep also may trigger your body to produce more of certain proteins thought to play a role in heart disease. For example, some studies find that people who repeatedly don't get enough sleep have higher than normal blood levels of C-reactive protein, a sign of inflammation. High levels of this protein may indicate an increased risk for a condition called atherosclerosis, or hardening of the arteries.

Your Hormones

When you were young, your mother may have told you that you need to get enough sleep to grow strong and tall. She may have been right! Deep sleep (stage 3 non-REM sleep) triggers more release of growth hormone, which contributes to growth in children and boosts muscle mass and the repair of cells and tissues in children and adults. Sleep's effect on the release of sex hormones also contributes to puberty and fertility. Consequently, women who work at night and tend to lack sleep may be at increased risk of miscarriage.

Your mother also probably was right if she told you that getting a good night's sleep on a regular basis would help keep you from getting sick and help you get better if you do get sick. During sleep, your body creates more cytokines—cellular hormones that help the immune system fight various infections. Lack of sleep can reduce your body's ability to fight off common infections. Research also reveals that a lack of sleep can reduce the body's response to the flu vaccine. For example, sleep-deprived volunteers given the flu vaccine produced less than half as many flu antibodies as those who were well rested and given the same vaccine.

Although lack of exercise and other factors also contribute, the current epidemic of diabetes and obesity seems to be related, at least in part, to chronically short or disrupted sleep or not sleeping during the night. Evidence is growing that sleep is a powerful regulator of appetite, energy use, and weight control. During sleep, the body's production of the appetite suppressor leptin increases, and the appetite stimulant ghrelin decreases. Studies find that the less people sleep, the more likely they are to be overweight or obese and to prefer foods that are higher in calories and carbohydrates. People who report an average total sleep time of 5 hours a night, for example, are much more likely to become obese, compared with people who sleep 7–8 hours a night.

A number of hormones released during sleep also control the body's use of energy. A distinct rise and fall of blood sugar levels during sleep appears to be linked to sleep stages.
Not sleeping at the right time, not getting enough sleep overall, or not getting enough of each stage of sleep disrupts this pattern. One study found that, when healthy young men slept only 4 hours a night for 6 nights in a row, their insulin and blood sugar levels matched those seen in people who were developing diabetes. Another study found that women who slept less than 7 hours a night were more likely to develop diabetes over time than those who slept between 7 and 8 hours a night.

Crash in Bed, Not on the Road

Most people are aware of the hazards of drunk driving. But driving while sleepy can be just as dangerous. Indeed, crashes due to sleepy drivers are as deadly as those due to drivers impaired by alcohol. And you don't have to be asleep at the wheel to put yourself and others in danger. Both alcohol and a lack of sleep limit your ability to react quickly to a suddenly braking car, a sharp curve in the road, or other situations that require rapid responses. Just a few seconds' delay in reaction time can be a life-or-death matter when driving. When people who lack sleep are tested on a driving simulator, they perform as badly as or worse than those who are drunk. The combination of alcohol and lack of sleep can be especially dangerous. There is increasing evidence that sleep deprivation and inexperience behind the wheel, both particularly common in adolescents, are a lethal combination.

Of course, driving is also hazardous if you fall asleep at the wheel, which happens surprisingly often. One-quarter of the drivers surveyed in New York State reported they had fallen asleep at the wheel at some time. Often, people briefly nod off at the wheel without being aware of it—they just can't recall what happened over the previous few seconds or longer. And people who lack sleep are more apt to take risks and make poor judgments, which also can boost their chances of getting in a car crash. Opening a window or turning up the radio won't help you stay awake while driving. The bottom line is that there is no substitute for sleep.

Be aware of these warning signs that you are too sleepy to drive safely: trouble keeping your eyes open or focused, continual yawning, or being unable to recall driving the past few miles. Remember, if you are short on sleep, stay out of the driver's seat!

Here are some potentially life-saving tips for avoiding drowsy driving:

- Be well rested before hitting the road. If you have several nights in a row of fewer than 7–8 hours of sleep, your reaction time slows. Restoring that reaction time to normal can take more than one night of good sleep, because a sleep debt accumulates after each night you lose sleep. It may take several nights of being well rested to repay that sleep debt and make you ready for driving on a long road trip.
- Avoid driving between midnight and 7 a.m. Unless you are accustomed to being awake then, this period of time is when we are naturally the least alert and most tired.
- Don't drive alone. A companion who can keep you engaged in conversation might help you stay awake while driving.
- Schedule frequent breaks on long road trips. If you feel sleepy while driving, pull off the road and take a nap for 15–20 minutes.
- Don't drink alcohol. Just one beer when you are sleep deprived will affect you as much as two or three beers when you are well rested.
- Don't count on caffeine or other tricks. Although drinking a cola or a cup of coffee might help keep you awake for a short time, it won't overcome extreme sleepiness or relieve a sleep debt.

How Much Sleep Is Enough?

Daphne: "I wake up early to get ready for school. I am tired in the morning, and by the end of the school day, I am very tired again. An afterschool nap seems to refresh me and help me focus on homework. Without it, I am grumpy and stressed, can't focus, and sometimes get headaches."

Animal studies suggest that sleep is as vital as food for survival. Rats, for example, normally live 2–3 years, but they live only 5 weeks if they are deprived of REM sleep and only 2–3 weeks if they are deprived of all sleep stages—a timeframe similar to death due to starvation. But how much sleep do humans need? To help answer that question, scientists look at how much people sleep when unrestricted, the average amount of sleep among various age groups, and the amount of sleep that studies reveal is necessary to function at your best.

When healthy adults are given unlimited opportunity to sleep, they sleep on average between 8 and 8.5 hours a night. But sleep needs vary from person to person. Some people appear to need only about 7 hours to avoid problem sleepiness, whereas others need 9 or more hours of sleep. Sleep needs also change throughout the life cycle. Newborns sleep between 16 and 18 hours a day, and children in preschool sleep between 11 and 12 hours a day. School-aged children and adolescents need at least 10 hours of sleep each night. The hormonal influences of puberty tend to shift adolescents' biological clocks. As a result, teenagers (who need between 9 and 10 hours of sleep a night) are more likely to go to bed later than younger children and adults, and they tend to want to sleep later in the morning. This delayed sleep–wake rhythm conflicts with the early-morning start times of many high schools and helps explain why most teenagers get an average of only 7–7.5 hours of sleep a night.

As people get older, the pattern of sleep also changes—especially the amount of time spent in deep sleep. This explains why children can sleep through loud noises and why they might not wake up when moved. The timing of sleep also tends to advance across the lifespan: compared with teenagers, older adults tend to go to bed earlier and wake up earlier. The quality—but not necessarily the quantity—of deep, non-REM sleep also changes, with a trend toward lighter sleep. The relative percentages of the stages of sleep appear to stay mostly constant after infancy. From midlife through late life, people awaken more throughout the night. These sleep disruptions cause older people to lose more and more of stages 1 and 2 non-REM sleep as well as REM sleep. Some older people complain of difficulty falling asleep, early morning awakenings, frequent and long awakenings during the night, daytime sleepiness, and a lack of refreshing sleep. Many sleep problems, however, are not a natural part of sleep in the elderly. Their sleep complaints may be due, in part, to medical conditions, illnesses, or medications they are taking—all of which can disrupt sleep. In fact, one study found that the prevalence of sleep problems is very low in healthy older adults. Other causes of some of older adults' sleep complaints are sleep apnea, restless legs syndrome, and other sleep disorders that become more common with age.
Also, older people are more likely to have their sleep disrupted by the need to urinate during the night. Some evidence shows that the biological clock shifts in older people, so they are more apt to go to sleep earlier at night and wake up earlier in the morning. No evidence indicates that older people can get by with less sleep than younger people. (See "Top 10 Sleep Myths" on page 22.) Poor sleep in older people may result in excessive daytime sleepiness, attention and memory problems, depressed mood, and overuse of sleeping pills.

Despite variations in sleep quantity and quality, both related to age and between individuals, studies suggest that the optimal amount of sleep needed to perform adequately, avoid a sleep debt, and not have problem sleepiness during the day is about 7–8 hours for adults and at least 10 hours for school-aged children and adolescents. Similar amounts seem to be necessary to avoid an increased risk of developing obesity, diabetes, or cardiovascular diseases.

Quality of sleep and the timing of sleep are as important as quantity. People whose sleep is frequently interrupted or cut short may not get enough of both non-REM sleep and REM sleep. Both types of sleep appear to be crucial for learning and memory—and perhaps for the restorative benefits of healthy sleep, including the growth and repair of cells.

Many people try to make up for lost sleep during the week by sleeping more on the weekends. But if you have lost too much sleep, sleeping in on a weekend does not completely erase your sleep debt. Certainly, sleeping more at the end of a week won't make up for any poor performance you had earlier in that week. Just one night of inadequate sleep can negatively affect your functioning and mood during at least the next day.

Daytime naps are another strategy some people use to make up for lost sleep during the night. Some evidence shows that short naps (up to an hour) can make up, at least partially, for the sleep missed on the previous night and improve alertness, mood, and work performance. But naps don't substitute for a good night's sleep. One study found that a daytime nap after a lack of sleep at night did not fully restore levels of blood sugar to the pattern seen with adequate nighttime sleep. If a nap lasts longer than 20 minutes, you may have a hard time waking up fully. In addition, late afternoon naps can make falling asleep at night more difficult.

Top 10 Sleep Myths

Myth 1: Sleep is a time when your body and brain shut down for rest and relaxation. No evidence shows that any major organ (including the brain) or regulatory system in the body shuts down during sleep. Some physiological processes actually become more active while you sleep. For example, secretion of certain hormones is boosted, and activity of the pathways in the brain linked to learning and memory increases.

Myth 2: Getting just 1 hour less sleep per night than needed will not have any effect on your daytime functioning. This lack of sleep may not make you noticeably sleepy during the day. But even slightly less sleep can affect your ability to think properly and respond quickly, and it can impair your cardiovascular health and energy balance as well as your body's ability to fight infections, particularly if lack of sleep continues. If you consistently do not get enough sleep, a sleep debt builds up that you can never repay. This sleep debt affects your health and quality of life and makes you feel tired during the day.

Myth 3: Your body adjusts quickly to different sleep schedules. Your biological clock makes you most alert during the daytime and least alert at night. Thus, even if you work the night shift, you will naturally feel sleepy when nighttime comes. Most people can reset their biological clock, but only by appropriately timed cues—and even then, by 1–2 hours per day at best. Consequently, it can take more than a week to adjust to a substantial change in your sleep–wake cycle—for example, when traveling across several time zones or switching from working the day shift to the night shift.

Myth 4: People need less sleep as they get older. Older people don't need less sleep, but they may get less sleep or find their sleep less refreshing. That's because as people age, the quality of their sleep changes. Older people are also more likely to have insomnia or other medical conditions that disrupt their sleep.

Myth 5: Extra sleep for one night can cure you of problems with excessive daytime fatigue. Not only is the quantity of sleep important, but also the quality of sleep. Some people sleep 8 or 9 hours a night but don't feel well rested when they wake up because the quality of their sleep is poor. A number of sleep disorders and other medical conditions affect the quality of sleep. Sleeping more won't lessen the daytime sleepiness these disorders or conditions cause. However, many of these disorders or conditions can be treated effectively with changes in behavior or with medical therapies. Additionally, one night of increased sleep may not correct multiple nights of inadequate sleep.

Myth 6: You can make up for lost sleep during the week by sleeping more on the weekends. Although this sleeping pattern will help you feel more rested, it will not completely make up for the lack of sleep or correct your sleep debt. This pattern also will not necessarily make up for impaired performance during the week or the physical problems that can result from not sleeping enough. Furthermore, sleeping later on the weekends can affect your biological clock, making it much harder to go to sleep at the right time on Sunday nights and get up early on Monday mornings.

Myth 7: Naps are a waste of time. Although naps are no substitute for a good night's sleep, they can be restorative and help counter some of the effects of not getting enough sleep at night. Naps can actually help you learn how to do certain tasks quicker. But avoid taking naps later than 3 p.m., particularly if you have trouble falling asleep at night, as late naps can make it harder for you to fall asleep when you go to bed. Also, limit your naps to no longer than 20 minutes, because longer naps will make it harder to wake up and get back in the swing of things. If you take more than one or two planned or unplanned naps during the day, you may have a sleep disorder that should be treated.

Myth 8: Snoring is a normal part of sleep. Snoring during sleep is common, particularly as a person gets older. Evidence is growing that snoring on a regular basis can make you sleepy during the day and increase your risk for diabetes and heart disease. In addition, some studies link frequent snoring to problem behavior and poorer school achievement in children. Loud, frequent snoring also can be a sign of sleep apnea, a serious sleep disorder that should be evaluated and treated. (See "Is Snoring a Problem?" on page 30.)

Myth 9: Children who don't get enough sleep at night will show signs of sleepiness during the day. Unlike adults, children who don't get enough sleep at night typically become hyperactive, irritable, and inattentive during the day. They also have increased risk of injury and more behavior problems, and their growth rate may be impaired. Sleep debt appears to be quite common during childhood and may be misdiagnosed as attention-deficit hyperactivity disorder.

Myth 10: The main cause of insomnia is worry. Although worry or stress can cause a short bout of insomnia, a persistent inability to fall asleep or stay asleep at night can be caused by a number of other factors. Certain medications and sleep disorders can keep you up at night. Other common causes of insomnia are depression, anxiety disorders, and asthma, arthritis, or other medical conditions with symptoms that tend to be troublesome at night. Some people who have chronic insomnia also appear to be more "revved up" than normal, so it is harder for them to fall asleep.

What Disrupts Sleep?

Sze-Ping: "When medicines didn't work for me, I started making big lifestyle changes. Now I try to eat a balanced diet and walk for at least an hour each day. Without doubt, my weight loss and more active lifestyle help me sleep better."

Many factors can prevent a good night's sleep. These factors range from well-known stimulants, such as coffee, to certain pain relievers, decongestants, and other culprits. Many people depend on the caffeine in coffee, cola, or tea to wake them up in the morning or to keep them awake. Caffeine is thought to block the cell receptors that adenosine (a substance in the brain) uses to trigger its sleep-inducing signals. In this way, caffeine fools the body into thinking it isn't tired. It can take as long as 6–8 hours for the effects of caffeine to wear off completely. Thus, drinking a cup of coffee in the late afternoon may prevent your falling asleep at night. Nicotine is another stimulant that can keep you awake. Nicotine also leads to lighter than normal sleep, and heavy smokers tend to wake up too early because of nicotine withdrawal.

Although alcohol is a sedative that makes it easier to fall asleep, it prevents deep sleep and REM sleep, allowing only the lighter stages of sleep. People who drink alcohol also tend to wake up in the middle of the night when the effects of an alcoholic "nightcap" wear off.

Certain commonly used prescription and over-the-counter medicines contain ingredients that can keep you awake. These ingredients include decongestants and steroids. Many medicines taken to relieve headaches contain caffeine. Heart and blood pressure medications known as beta blockers can make it difficult to fall asleep and cause more awakenings during the night. People who have chronic asthma or bronchitis also have more problems falling asleep and staying asleep than healthy people, either because of their breathing difficulties or because of the medicines they take. Other chronic painful or uncomfortable conditions—such as arthritis, congestive heart failure, and sickle cell anemia—can disrupt sleep, too.

A number of psychological disorders—including schizophrenia, bipolar disorder, and anxiety disorders—are well known for disrupting sleep. Depression often leads to insomnia, and insomnia can cause depression. Some of these psychological disorders are more likely to disrupt REM sleep.
Psychological stress also takes its toll on sleep, making it more difficult to fall asleep or stay asleep. People who feel stressed also tend to spend less time in deep sleep and REM sleep. Many people report having difficulties sleeping if, for example, they have recently lost a loved one, are going through a divorce, or are under stress at work.

Menstrual cycle hormones can affect how well women sleep. Progesterone is known to induce sleep and circulates in greater concentrations in the second half of the menstrual cycle. For this reason, women may sleep better during this phase of their menstrual cycle. On the other hand, many women report trouble sleeping the night before their menstrual flow starts. This sleep disruption may be related to the abrupt drop in progesterone levels that occurs just before menstruation. Women in their late forties and early fifties, however, report more difficulties sleeping (insomnia) than younger women. These difficulties may be linked to menopause, when they have lower concentrations of progesterone. Hot flashes in women of this age also may cause sleep disruption and difficulties.

Certain lifestyle factors also may deprive a person of needed sleep. Large meals or vigorous exercise just before bedtime can make it harder to fall asleep. While vigorous exercise in the evening may delay sleep onset for various reasons, exercise in the daytime is associated with improved nighttime sleep. If you aren't getting enough sleep or aren't falling asleep early enough, you may be overscheduling activities that can prevent you from getting the quiet relaxation time you need to prepare for sleep. Most people report that it's easier to fall asleep if they have time to wind down into a less active state before sleeping. Relaxing in a hot bath or having a hot, caffeine-free beverage before bedtime may help. In addition, your body temperature drops after a hot bath in a way that mimics, in part, what happens as you fall asleep. Probably for both these reasons, many people report that they fall asleep more easily after a hot bath.

Your sleeping environment also can affect your sleep. Clear your bedroom of any potential sleep distractions, such as noises, bright lights, a TV, a cell phone, or computer. Having a comfortable mattress and pillow can help promote a good night's sleep. You also sleep better if the temperature in your bedroom is kept on the cool side. For more ideas on improving your sleep, check out the tips for getting a good night's sleep below.

Tips for Getting a Good Night's Sleep

- Stick to a sleep schedule. Go to bed and wake up at the same time each day. As creatures of habit, people have a hard time adjusting to changes in sleep patterns. Sleeping later on weekends won't fully make up for a lack of sleep during the week and will make it harder to wake up early on Monday morning.
- Exercise is great, but not too late in the day. Try to exercise at least 30 minutes on most days but not later than 2–3 hours before your bedtime.
- Avoid caffeine and nicotine. Coffee, colas, certain teas, and chocolate contain the stimulant caffeine, and its effects can take as long as 8 hours to wear off fully. Therefore, a cup of coffee in the late afternoon can make it hard for you to fall asleep at night (a short example of this timing appears after this list). Nicotine is also a stimulant, often causing smokers to sleep only very lightly. In addition, smokers often wake up too early in the morning because of nicotine withdrawal.
- Avoid alcoholic drinks before bed. Having a "nightcap" or alcoholic beverage before sleep may help you relax, but heavy use robs you of deep sleep and REM sleep, keeping you in the lighter stages of sleep. Heavy alcohol ingestion also may contribute to impairment in breathing at night. You also tend to wake up in the middle of the night when the effects of the alcohol have worn off.
- Avoid large meals and beverages late at night. A light snack is okay, but a large meal can cause indigestion that interferes with sleep. Drinking too many fluids at night can cause frequent awakenings to urinate.
- If possible, avoid medicines that delay or disrupt your sleep. Some commonly prescribed heart, blood pressure, or asthma medications, as well as some over-the-counter and herbal remedies for coughs, colds, or allergies, can disrupt sleep patterns. If you have trouble sleeping, talk to your doctor or pharmacist to see whether any drugs you're taking might be contributing to your insomnia and ask whether they can be taken at other times during the day or early in the evening.
- Don't take naps after 3 p.m. Naps can help make up for lost sleep, but late afternoon naps can make it harder to fall asleep at night.
- Relax before bed. Don't overschedule your day so that no time is left for unwinding. A relaxing activity, such as reading or listening to music, should be part of your bedtime ritual.
- Take a hot bath before bed. The drop in body temperature after getting out of the bath may help you feel sleepy, and the bath can help you relax and slow down so you're more ready to sleep.
- Have a good sleeping environment. Get rid of anything in your bedroom that might distract you from sleep, such as noises, bright lights, an uncomfortable bed, or warm temperatures. You sleep better if the temperature in the room is kept on the cool side. A TV, cell phone, or computer in the bedroom can be a distraction and deprive you of needed sleep. Having a comfortable mattress and pillow can help promote a good night's sleep. Individuals who have insomnia often watch the clock. Turn the clock's face out of view so you don't worry about the time while trying to fall asleep.
- Have the right sunlight exposure. Daylight is key to regulating daily sleep patterns. Try to get outside in natural sunlight for at least 30 minutes each day. If possible, wake up with the sun or use very bright lights in the morning. Sleep experts recommend that, if you have problems falling asleep, you should get an hour of exposure to morning sunlight and turn down the lights before bedtime.
- Don't lie in bed awake. If you find yourself still awake after staying in bed for more than 20 minutes or if you are starting to feel anxious or worried, get up and do some relaxing activity until you feel sleepy. The anxiety of not being able to sleep can make it harder to fall asleep.
- See a doctor if you continue to have trouble sleeping. If you consistently find it difficult to fall or stay asleep and/or feel tired or not well rested during the day despite spending enough time in bed at night, you may have a sleep disorder. Your family doctor or a sleep specialist should be able to help you, and it is important to rule out other health or psychiatric problems that may be disturbing your sleep.
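As a small illustration of the caffeine timing mentioned in the tips above, the sketch below works backward from a planned bedtime using the guide's point that caffeine can take as long as 8 hours to wear off fully. The function name, the example bedtime, and the idea of a single "cutoff" time are assumptions made for this example only; sensitivity to caffeine varies from person to person.

```python
# Illustrative sketch only. The 8-hour figure comes from the tips above;
# the example bedtime and function name are assumptions for the example.
from datetime import datetime, timedelta

def latest_caffeine_time(bedtime_str, wear_off_hours=8):
    """Return the latest clock time to finish caffeine before a planned bedtime."""
    bedtime = datetime.strptime(bedtime_str, "%H:%M")
    cutoff = bedtime - timedelta(hours=wear_off_hours)
    return cutoff.strftime("%I:%M %p").lstrip("0")

print(latest_caffeine_time("22:30"))  # planned 10:30 p.m. bedtime -> about 2:30 PM
```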
Is Snoring a Problem?

Jim: "My wife noticed that I snored loudly and sometimes stopped breathing in the middle of the night. She was the one who finally pushed me to see a doctor."

Long the material for jokes, snoring is generally accepted as common and annoying in adults but as nothing to worry about. However, snoring is no laughing matter. Frequent, loud snoring is often a sign of sleep apnea and may increase your risk of developing cardiovascular disease and diabetes. Snoring also may lead to daytime sleepiness and impaired performance.

Snoring is caused by a narrowing or partial blockage of the airways at the back of your mouth, throat, or nose. This obstruction results in increased air turbulence when breathing in, causing the soft tissues in your upper airways to vibrate. The end result is a noisy snore that can disrupt the sleep of your bed partner. This narrowing of the airways is typically caused by the soft palate, tongue, and throat relaxing while you sleep, but allergies or sinus problems also can contribute to a narrowing of the airways, as can being overweight and having extra soft tissue around your upper airways. The larger the tissues in your soft palate (the roof of your mouth in the back of your throat), the more likely you are to snore while sleeping. Alcohol or sedatives taken shortly before sleep also promote snoring. These drugs cause greater relaxation of the tissues in your throat and mouth.

Surveys reveal that about one-half of all adults snore, and 50 percent of these adults do so loudly and frequently. African Americans, Asians, and Hispanics are more likely to snore loudly and frequently compared with Caucasians, and snoring problems increase with age.

Not everyone who snores has sleep apnea, but people who have sleep apnea typically do snore loudly and frequently. Sleep apnea is a serious sleep disorder, and its hallmark is loud, frequent snoring with pauses in breathing or shallow breaths while sleeping. (See "Sleep Apnea" on page 38.) Even if you don't experience these breathing pauses, snoring can still be a problem for you as well as for your bed partner. Snoring adds extra effort to your breathing, which can reduce the quality of your sleep and lead to many of the same health consequences as sleep apnea. One study found that older adults who did not have sleep apnea, but who snored 6–7 nights a week, were more than twice as likely to report being extremely sleepy during the day than those who never snored. The more people snored, the more daytime fatigue they reported. That sleepiness may help explain why snorers are more likely to be in car crashes than people who don't snore. Loud snoring also can disrupt the sleep of bed partners and strain marital relations, especially if snoring causes the spouses to sleep in separate bedrooms. In addition, snoring increases the risk of developing diabetes and heart disease. One study found that women who snored regularly were twice as likely as those who did not snore to develop diabetes, even if they were not overweight (another risk factor for diabetes). Other studies suggest that regular snoring may raise the lifetime risk of developing high blood pressure, heart failure, and stroke.

About one-third of all pregnant women begin snoring for the first time during their second trimester. If you are snoring while pregnant, let your doctor know. Snoring in pregnancy can be associated with high blood pressure and can have a negative effect on your baby's growth and development. Your doctor will keep a close eye on your blood pressure throughout your pregnancy and can let you know if any additional evaluations for the snoring might be useful. In most cases, the snoring and any related high blood pressure will go away shortly after delivery.

Snoring also can be a problem in children. As many as 10–15 percent of young children, who typically have enlarged adenoids and tonsils (both tissues in the throat), snore on a regular basis. Several studies show that children who snore (with or without sleep apnea) are more likely than those who do not snore to score lower on tests that measure intelligence, memory, and attention span. These children also have more problematic behavior, including hyperactivity. The end result is that children who snore don't perform in school as well as those who do not snore. Strikingly, snoring was linked to a greater drop in IQ than that seen in children who had elevated levels of lead in their blood. Although the behavior of children improves after they stop snoring, studies suggest they may continue to get poorer grades in school, perhaps because of lasting effects on the brain linked to the snoring. You should have your child evaluated by your doctor if the child snores loudly and frequently—three to four times a week—especially if you note brief pauses in breathing while asleep and if there are signs of hyperactivity or daytime sleepiness, inadequate school achievement, or slower than expected development.

Surgery to remove the adenoids and tonsils of children often can cure their snoring and any associated sleep apnea. Such surgery has been linked to a reduction in hyperactivity and improved ability to pay attention, even in children who showed no signs of sleep apnea before surgery. Snoring in older children and adults may be relieved by less invasive measures, however. These measures include losing weight, refraining from use of tobacco, sleeping on the side rather than on the back, or elevating the head while sleeping. Treating chronic congestion and refraining from alcohol or sedatives before sleeping also may decrease snoring. In some adults, snoring can be relieved by dental appliances that reposition the soft tissues in the mouth. Although numerous over-the-counter nasal strips and sprays claim to relieve snoring, no scientific evidence supports those claims.

Common Sleep Disorders

Lauren: "My restless legs syndrome made me lose sleep and affected my quality of life. But I'm in a good place right now. I'm taking the right medicine for me, and I've adopted a healthy, active lifestyle. I am very passionate about taking control of my health."

A number of sleep disorders can disrupt your sleep quality and make you overly sleepy during the day, even if you spent enough time in bed to be well rested. (See "Common Signs of a Sleep Disorder" on page 34.) More than 70 sleep disorders affect at least 40 million Americans and account for an estimated $16 billion in medical costs each year, not counting costs due to lost work time, car accidents, and other factors. The four most common sleep disorders are insomnia, sleep apnea, restless legs syndrome, and narcolepsy. Additional sleep problems include chronic insufficient sleep, circadian rhythm abnormalities, and "parasomnias" such as sleepwalking, sleep paralysis, and night terrors.
Common Signs of a Sleep Disorder

Look over this list of common signs of a sleep disorder, and talk to your doctor if you have any of them on three or more nights a week:

- It takes you more than 30 minutes to fall asleep at night.
- You awaken frequently in the night and then have trouble falling back to sleep again.
- You awaken too early in the morning.
- You often don't feel well rested despite spending 7–8 hours or more asleep at night.
- You feel sleepy during the day and fall asleep within 5 minutes if you have an opportunity to nap, or you fall asleep unexpectedly or at inappropriate times during the day.
- Your bed partner claims you snore loudly, snort, gasp, or make choking sounds while you sleep, or your partner notices that your breathing stops for short periods.
- You have creeping, tingling, or crawling feelings in your legs that are relieved by moving or massaging them, especially in the evening and when you try to fall asleep.
- You have vivid, dreamlike experiences while falling asleep or dozing.
- You have episodes of sudden muscle weakness when you are angry or fearful, or when you laugh.
- You feel as though you cannot move when you first wake up.
- Your bed partner notes that your legs or arms jerk often during sleep.
- You regularly need to use stimulants to stay awake during the day.

Also keep in mind that, although children can show some of these signs of a sleep disorder, they often do not show signs of excessive daytime sleepiness. Instead, they may seem overactive and have difficulty focusing and concentrating. They also may not do their best in school.

Insomnia

Insomnia is defined as having trouble falling asleep or staying asleep, or as having unrefreshing sleep despite having ample opportunity to sleep. Life is filled with events that occasionally cause insomnia for a short time. Such temporary insomnia is common and is often brought on by situations such as stress at work, family pressures, or a traumatic event. A National Sleep Foundation poll of adults in the United States found that close to half of the respondents reported temporary insomnia in the nights immediately after the terrorist attacks on September 11, 2001.

Chronic insomnia is defined as having symptoms at least 3 nights per week for more than 1 month. Most cases of chronic insomnia are secondary, which means they are due to another disorder or medications. Primary chronic insomnia is a distinct sleep disorder; its cause is not yet well understood. About 30–40 percent of adults say they have some symptoms of insomnia within any given year, and about 10–15 percent of adults say they have chronic insomnia. Chronic insomnia becomes more common with age, and women are more likely than men to report having insomnia. Insomnia often causes problems during the day, such as extreme sleepiness, fatigue, a lack of energy, difficulty concentrating, depressed mood, and irritability. Thus, untreated insomnia can impair quality of life as much as, or more than, other chronic medical problems.

Chronic insomnia is often caused by one or more of the following:

- A disease or mood disorder. The most common causes of insomnia are depression and/or anxiety disorders. Neurological disorders, such as Alzheimer's or Parkinson's disease, also can have insomnia as a symptom. Chronic insomnia can result from thyroid dysfunction, arthritis, asthma, or other medical conditions in which symptoms become more troublesome at night, making it difficult to fall asleep or stay asleep.
- Various prescribed and over-the-counter medications that can disrupt sleep, such as decongestants, certain pain relievers, and steroids.
- Sleep-disrupting behavior such as drinking alcohol, exercising shortly before bedtime, ingesting caffeine late in the day, watching TV or reading while in bed, or irregular sleep schedules due to shift work or other causes.
- Another sleep disorder, such as sleep apnea or restless legs syndrome.

Some people, however, have primary chronic insomnia. This condition is linked to a tendency to be more "revved up" than normal (hyperarousal). People who have primary chronic insomnia may have heightened levels of certain hormones, higher body temperatures, faster heart rates, and a different pattern of brain waves while they sleep.

Doctors diagnose insomnia based mainly on sleep history, often by reviewing a sleep diary. An overnight sleep recording may be required if another sleep disorder is suspected. Doctors also will try to diagnose and treat any other underlying medical or psychological problems as well as identify behaviors that might be causing the insomnia.

Often, people who have insomnia enter into a vicious cycle—because they've had trouble sleeping on previous nights, they become anxious at the slightest sign that they may not be falling asleep right away. That anxiety can make it more difficult for them to fall asleep. The more time they spend in bed not sleeping, and watching the clock, the more their anxiety—and sleeplessness—increases. To break that cycle of anxiety and negative conditioning, experts recommend going to bed only when you're sleepy. If you can't fall asleep (or fall back to sleep) within 20 minutes, get out of bed, go into another room, and do a relaxing activity (such as reading) until you feel sleepy again. Then return to bed. Studies have shown that this reconditioning therapy is an effective way to treat insomnia.

Relaxation therapy is another strategy that works for some people who have insomnia. Relaxation therapy may include meditation and other mental relaxation techniques. It also may include physical relaxation techniques, such as progressively tensing and then relaxing each of the muscle groups in your body before sleep. Another method is to focus on breathing deeply. Relaxation therapy can help your body and mind slow down so that you can fall asleep more easily at bedtime.

Sleep restriction therapy also works for some people who have insomnia. Calculate your average sleep time over the course of a week, and then limit your nightly sleep time to that average. Gradually add more sleep time each night until you achieve a more normal night's sleep. You should avoid daytime naps longer than 15–20 minutes during sleep restriction therapy. Napping can make it harder to fall asleep at night, which may prolong insomnia. In addition, during sleep restriction therapy, avoid driving a car or operating dangerous machinery until you are getting enough sleep at night.

All of these behavioral changes are part of a treatment called cognitive behavioral therapy. Cognitive behavioral therapy also can be used to replace negative thoughts about sleep, such as "I'll never fall asleep without sleeping pills," with more realistic positive thinking. Cognitive behavioral therapy is effective in most people who have chronic insomnia.
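For readers who like to see the arithmetic, here is a minimal Python sketch of the sleep restriction calculation described above: average a week of sleep-diary entries, start the nightly sleep window at that average, and widen it gradually. The diary values, the function names, and the 15-minute weekly increase are illustrative assumptions only; the guide does not say how quickly to add time, and this sketch is not a substitute for working with your doctor or a sleep specialist.

```python
# Illustrative sketch only; not medical advice.
# Assumes a hypothetical week of sleep-diary entries recording hours actually slept.

def average_sleep_time(nightly_hours):
    """Average nightly sleep over the diary period (the guide suggests one week)."""
    return sum(nightly_hours) / len(nightly_hours)

def sleep_restriction_schedule(nightly_hours, weeks=4, weekly_increase_hours=0.25):
    """Start the nightly sleep window at the diary average, then widen it gradually.

    The 0.25-hour weekly increase is a made-up illustration; the guide only says
    to 'gradually add more sleep time each night'.
    """
    window = average_sleep_time(nightly_hours)
    schedule = []
    for week in range(1, weeks + 1):
        schedule.append((week, round(window, 2)))
        window += weekly_increase_hours
    return schedule

diary = [5.5, 6.0, 5.0, 6.5, 5.5, 6.0, 5.5]   # hypothetical hours slept on 7 nights
print(f"Starting sleep window: {average_sleep_time(diary):.1f} hours")
for week, hours in sleep_restriction_schedule(diary):
    print(f"Week {week}: allow about {hours} hours in bed")
```

Run as written, the example starts the window at the diary average of roughly 5.7 hours and adds 15 minutes each week; the point is only to show where the starting number comes from.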
Some people who have chronic insomnia that is not corrected by behavioral therapy or treatment of an underlying condition may need a prescription medication. You should talk to a doctor before trying to treat insomnia with alcohol, over-the-counter or prescribed short-acting sedatives, or sedating antihistamines that induce drowsiness. The benefits of these treatments are limited, and they have risks. Some may help you fall asleep but leave you feeling unrefreshed in the morning. Others have longer lasting effects and leave you feeling still tired and groggy in the morning. Some also may lose their effectiveness over time. Doctors may prescribe sedating antidepressants for insomnia, but the effectiveness of these medicines in people who do not have depression is not known, and there are significant side effects. Common Sleep Disorders 38Your Guide to Healthy Sleep To treat their insomnia, some people pursue “natural” remedies, such as melatonin supplements or valerian teas or extracts. These remedies are available over the counter. Little evidence exists that melatonin can help relieve insomnia. Studies with valerian also have been inconclusive, and the actual dose and purity of various supplements, extracts, or teas that contain valerian may vary from product to product. In addition, because melatonin, valerian, and other natural remedies are not regulated by the Food and Drug Administration, their safety is not monitored. Sleep Apnea In people who have sleep apnea (also referred to as sleep-disordered breathing), breathing briefly stops or becomes very shallow during sleep. This change is caused by intermittent blocking of the upper airway, usually when the soft tissue in the rear of the throat collapses and partially or completely closes the airway. Each pause in breathing typically lasts 10–120 seconds and may occur 20–30 times or more each sleeping hour. If you have sleep apnea, not enough air can flow into your lungs through your mouth and nose during sleep, even though breathing efforts continue. When this happens, the amount of oxygen in your blood decreases. Your brain responds by awakening you enough to tighten the upper airway muscles and open your windpipe. Normal breaths then start again, often with a loud snort or choking sound. Although people who have sleep apnea typically snore loudly and frequently, not everyone who snores has sleep apnea. (See “Is Snoring a Problem?” on page 30.) Because people who have sleep apnea frequently go from deeper sleep to lighter sleep during the night, they rarely spend enough time in deep, restorative stages of sleep. They are therefore often exces sively sleepy during the day. Such sleepiness is thought to lead to mood and behavior problems, including depression, and it more than triples the risk of being in a traffic or work-related accident. The many brief drops in blood-oxygen levels that occur during the night can result in morning headaches and trouble concentrating, thinking clearly, learning, and remembering. Additionally, the intermittent oxygen drops and reduced sleep quality together trigger the release of stress hormones. These hormones raise your blood pressure and heart rate and boost the risk of heart attack, stroke, irregular heartbeats, and congestive heart failure. In addition, 39 Common Sleep DisordersI realize now that my sleep apnea affected my quality of life. I felt tired all the time—so tired that I couldn’t exercise or spend time with my kids. 
I had other sleep apnea symptoms that affected my work—headaches, confusion, making errors, etc. “Looking back, I know that I should have taken it more seriously and told my doctor about my symptoms many years before I did. “One thing that helps me is physical activity. Now that I am feeling better, I come home from work with enough energy to have an exercise routine. J I M “ ” 40Your Guide to Healthy Sleep untreated sleep apnea can lead to changes in energy metabolism (the way your body changes food and oxygen into energy) that increase the risk for developing obesity and diabetes. Anyone can have sleep apnea. It is estimated that at least 12–18 million American adults have sleep apnea, making it as common as asthma. More than one-half of the people who have sleep apnea are overweight. Sleep apnea is more common in men. More than 1 in 25 middle-aged men and 1 in 50 middle-aged women have sleep apnea along with extreme daytime sleepiness. About 3 percent of children and 10 percent or more of people over age 65 have sleep apnea. This condition occurs more frequently in African Americans, Asians, Native Americans, and Hispanics than in Caucasians. More than one-half of all people who have sleep apnea are not diagnosed. People who have sleep apnea generally are not aware that their breathing stops in the night. They just notice that they don’t feel well rested when they wake up and are sleepy throughout the day. Their bed partners are likely to notice, however, that they snore loudly and frequently and that they often stop breathing briefly while sleeping. Doctors suspect sleep apnea if these symptoms are present, but the diagnosis must be confirmed with overnight sleep monitoring. (See “How Are Sleep Disorders Diagnosed?” on page 44.) This monitoring will reveal pauses in breathing, frequent sleep arousals (changes from sleep to wakefulness), and intermittent drops in levels of oxygen in the blood. 41 n n n n Like adults who have sleep apnea, children who have this disorder usually snore loudly, snort or gasp, and have brief pauses in breath ing while sleeping. Small children often have enlarged tonsils and adenoids that increase their risk for sleep apnea. But doctors may not suspect sleep apnea in children because, instead of showing the typical signs of sleepiness during the day, these children often become agitated and may be considered hyperactive. The effects of sleep apnea in children may include poor school performance and difficult, aggressive behavior. A number of factors can make a person susceptible to sleep apnea. These factors include: n n n n n n Throat muscles and tongue that relax more than normal while asleep Enlarged tonsils and adenoids Being overweight—the excess fat tissue around your neck makes it harder to keep the throat area open Head and neck shape that creates a somewhat smaller airway size in the mouth and throat area Congestion, due to allergies, that also can narrow the airway Family history of sleep apnea If your doctor suspects that you have sleep apnea, you may be referred to a sleep specialist. Some of the ways to help diagnose sleep apnea include: A medical history that includes asking you and your family questions about how you sleep and how you function during the day. Checking your mouth, nose, and throat for extra or large tissues—for example, checking the tonsils, uvula (the tissue that hangs from the middle of the back of the mouth), and soft palate (the roof of your mouth in the back of your throat). 
An overnight recording of what happens with your breathing during sleep (polysomnogram, or PSG). A multiple sleep latency test (MSLT), usually done in a sleep center, to see how quickly you fall asleep at times when you would normally be awake. (Falling asleep in only a few minutes usually means that you are very sleepy during the day. Being very sleepy during the day can be a sign of sleep apnea.) Common Sleep Disorders 42Your Guide to Healthy Sleep n n n Once all the tests are completed, the sleep specialist will review the results and work with you and your family to develop a treatment plan. Changes in daily activities or habits may help reduce your symptoms: Sleep on your side instead of on your back. Sleeping on your side will help reduce the amount of upper airway collapse during sleep. Avoid alcohol, smoking, sleeping pills, herbal supplements, and any other medications that make you sleepy. They make it harder for your airways to stay open while you sleep, and sedatives can make the breathing pauses longer and more severe. Tobacco smoke irritates the airways and can help trigger the intermittent collapse of the upper airway. Lose weight if you are overweight. Even a little weight loss can sometimes improve symptoms. These changes may be all that are needed to treat mild sleep apnea. However, if you have moderate or severe sleep apnea, you will need additional, more direct treatment approaches. Continuous positive airway pressure (CPAP) is the most effective treatment for sleep apnea in adults. A CPAP machine uses mild air pressure to keep your airways open while you sleep. The machine delivers air to your airways through a specially designed nasal mask. The mask does not breathe for you; the flow of air creates increased pressure to keep the airways in your nose and mouth more open while you sleep. The air pressure is adjusted so that it is just enough to stop your airways from briefly becoming too small during sleep. The pressure is constant and continuous. Sleep apnea will return if CPAP is stopped or if it is used incorrectly. People who have severe sleep apnea symptoms generally feel much better once they begin treatment with CPAP. CPAP treatment can cause side effects in some people. Possible side effects include dry or stuffy nose, irritation of the skin on the face, bloating of the stom ach, sore eyes, or headaches. If you have trouble with CPAP side effects, work with your sleep specialist and support staff. Together, you can do things to reduce or eliminate these problems. Currently, no medications cure sleep apnea. However, some prescription medications may help relieve the excessive sleepiness that sometimes persists even with CPAP treatment of sleep apnea. 43My doctor prescribed CPAP (continuous positive airway pressure) for me, but it was not easy to use at first. Sleeping with a CPAP machine was uncomfortable for me, so I didn’t use it like I should have—rarely, if at all. One day at work, I started feeling really bad, so I went to the hospital. The doctors told me that since I had not been using CPAP regularly, not enough oxygen was going to my brain, which caused symptoms like those for a stroke. So, I went back to my doctor and got a different CPAP machine that was more comfortable for me. “It’s important to talk with your health care provider to make sure that your treatment is comfortable and works for you. J I M “ ” Another treatment approach that may help some people is the use of a mouthpiece (oral or dental appliance). 
If you have mild sleep apnea or do not have sleep apnea but snore very loudly, your doctor or dentist also may recommend this. A custom-fitted plastic mouthpiece will be made by a dentist or an orthodontist (a specialist in correcting teeth or jaw problems). The mouthpiece will adjust your lower jaw and tongue to help keep the airway in your throat more open while you are sleeping. Air can then flow more easily into your lungs because there is less resistance to breathing. Following up with the dentist or orthodontist is important to correct any side effects and to be sure that your mouthpiece continues to fit properly. It is also important to have a followup sleep study to see whether your sleep apnea has improved.

Some people who have sleep apnea may benefit from surgery; this depends on the findings of the evaluation by the sleep specialist. Removing tonsils and adenoids that are blocking the airway is done frequently, especially in children. Uvulopalatopharyngoplasty (UPPP) is a surgery for adults that removes the tonsils, uvula, and part of the soft palate. Tracheostomy is a surgery used rarely and only in severe sleep apnea when no other treatments have been successful. A small hole is made in the windpipe, and a tube is inserted. Air will flow through the tube and into the lungs, bypassing the obstruction in the upper airway.

How Are Sleep Disorders Diagnosed?

Depending on your symptoms, your doctor will gather information and consider several possible tests when trying to diagnose a sleep disorder:
- Sleep history and sleep log. Your doctor will ask you how many hours you sleep each night, how often you awaken during the night and for how long, how long it takes you to fall asleep, how well rested you feel upon awakening, and how sleepy you feel during the day. Your doctor may ask you to keep a sleep diary for a few weeks. (See "Sample Sleep Diary" on page 54.) Your doctor also may ask you whether you have any symptoms of sleep apnea or restless legs syndrome, such as loud snoring, snorting or gasping, morning headaches, tingling or unpleasant sensations in the limbs that are relieved by moving them, and jerking of the limbs during sleep. Your sleeping partner may be asked whether you have some of these symptoms, as you may not be aware of them yourself.
- Sleep recording in a sleep laboratory (polysomnogram). A sleep recording or polysomnogram (PSG) is usually done while you stay overnight at a sleep center or sleep laboratory. Electrodes and other monitors are placed on your scalp, face, chest, limbs, and finger. While you sleep, these devices measure your brain activity, eye movements, muscle activity, heart rate and rhythm, blood pressure, and how much air moves in and out of your lungs. This test also checks the amount of oxygen in your blood. A PSG test is painless. In certain circumstances, the PSG can be done at home. A home monitor can be used to record heart rate, how air moves in and out of your lungs, the amount of oxygen in your blood, and your breathing effort.
- Multiple sleep latency test (MSLT). This daytime sleep study measures how sleepy you are and is particularly useful for diagnosing narcolepsy. The MSLT is conducted in a sleep laboratory and typically done after an overnight sleep recording (PSG). In this test, monitoring devices for sleep stage are placed on your scalp and face. You are asked to nap four or five times for 20 minutes every 2 hours during the day. Technicians note how quickly you fall asleep and how long it takes you to reach various stages of sleep, especially REM sleep, during your naps. Normal individuals either do not fall asleep during these short designated naptimes or take a long time to fall asleep. People who fall asleep in less than 5 minutes are likely to require treatment for a sleep disorder, as are those who quickly reach REM sleep during their naps. It is important to have a sleep specialist interpret the results of your PSG or MSLT. See "How To Find a Sleep Center and Sleep Specialist" on page 56.
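The nap-by-nap bookkeeping described in the MSLT item above can be made concrete with a minimal sketch. This is only an illustration of how the numbers from the four or five scheduled naps might be summarized against the thresholds mentioned in this guide; the function name and the example latency values are invented, and real MSLT results should always be interpreted by a sleep specialist.

```python
# Illustrative sketch only; not a clinical or diagnostic tool.
# The nap latencies and REM flags below are invented example values.

def summarize_mslt(latencies_min, rem_onset_flags, no_sleep_value=20.0):
    """Summarize an MSLT session of 4-5 scheduled naps.

    latencies_min: minutes it took to fall asleep in each nap (use
        no_sleep_value, the full nap length, if sleep never occurred).
    rem_onset_flags: True for naps in which REM sleep was reached quickly.
    """
    mean_latency = sum(latencies_min) / len(latencies_min)
    naps_under_5 = sum(1 for x in latencies_min if x < 5)       # rapid sleep onset
    rem_naps = sum(1 for flag in rem_onset_flags if flag)        # early-REM naps
    return {
        "mean_latency_min": round(mean_latency, 1),
        "naps_under_5_min": naps_under_5,
        "rem_onset_naps": rem_naps,
    }

# Example: five naps, mostly rapid sleep onset, with two early-REM naps.
print(summarize_mslt([3.0, 4.5, 2.0, 6.0, 3.5], [True, False, True, False, False]))
```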
"I started to get weird feelings in my legs at night while I slept. To feel better, I would get up and move around and stretch. Then the weird feelings began to happen more often and made me lose sleep. I started to think that something was wrong. I decided to go to the doctor and was diagnosed with restless legs syndrome (RLS). Because RLS symptoms can change, I'm always trying to find the right mix of diet, medication, and exercise. Exercise and massage help me manage my RLS. Yoga helps a lot too, because of all the stretching involved."
- Lauren

Restless Legs Syndrome

Restless legs syndrome (RLS) causes an unpleasant prickling or tingling in the legs, especially in the calves, that is relieved by moving or massaging them. People who have RLS feel a need to stretch or move their legs to get rid of the uncomfortable or painful feelings. As a result, it may be difficult to fall asleep and stay asleep. One or both legs may be affected. Some people also feel the sensations in their arms. These sensations also can occur when lying down or sitting for long periods of time, such as while at a desk, riding in a car, or watching a movie.

Many people who have RLS also have brief limb movements during sleep, often with abrupt onset, occurring every 5–90 seconds. This condition, known as periodic limb movements in sleep (PLMS), can repeatedly awaken people who have RLS, reducing their total sleep time and interrupting their sleep. Some people have PLMS but have no abnormal sensations in their legs while awake.

RLS affects 5–15 percent of Americans, and its prevalence increases with age. RLS occurs more often in women than men. One study found that RLS accounted for one-third of the insomnia seen in patients older than age 60. Children also can have RLS. In children, the condition may be associated with symptoms of attention-deficit hyperactivity disorder. However, it's not fully known how the disorders are related. Sometimes "growing pains" can be mistaken for RLS.

RLS is often inherited. Pregnancy, kidney failure, and anemia related to iron or vitamin deficiency can trigger or worsen RLS symptoms. Researchers suspect that these conditions cause an iron deficiency that results in a lack of dopamine, which is used by the brain to control physical sensation and limb movements.

Doctors usually can diagnose RLS by patients' symptoms and a telltale worsening of symptoms at night or while at rest. Some doctors may order a blood test to check ferritin levels (ferritin is a form of iron). Doctors also may ask people who have RLS to spend a night in a sleep laboratory, where they are monitored to rule out other sleep disorders and to document the excessive limb movements.

RLS is treatable but not always curable. Dramatic improvements are seen quickly when patients are given dopamine-like drugs or iron supplements.
Alternatively, people who have milder cases may be treated successfully with sedatives or behavioral strategies. These strategies include stretching, taking a hot bath, or massaging the legs before bedtime. Avoiding caffeinated beverages also can help reduce symptoms, and certain medications (e.g., some antidepressants, particularly selective serotonin reuptake inhibitors) may cause RLS. If iron or vitamin deficiency underlies RLS, symptoms may improve with prescribed iron, vitamin B12, or folate supplements. Some people may require anticonvulsant medications to control the creeping and crawling sensations in their limbs. Others who have severe symptoms that are associated with another medical disorder or that do not respond to normal treatments may need to be treated with pain relievers.

Narcolepsy

Narcolepsy's main symptom is extreme and overwhelming daytime sleepiness, even after adequate nighttime sleep. In addition, nighttime sleep may be fragmented by frequent awakenings. People who have narcolepsy often fall asleep at inappropriate times and places. Although TV sitcoms occasionally feature these individuals to generate a few laughs, narcolepsy is no laughing matter. People who have narcolepsy experience daytime "sleep attacks" that last from seconds to more than one-half hour, can occur without warning, and may cause injury. These embarrassing sleep spells also can make it difficult to work and to maintain normal personal or social relationships.

With narcolepsy, the usually sharp distinctions between being asleep and awake are blurred. Also, people who have narcolepsy tend to fall directly into dream-filled REM sleep, rather than enter REM sleep gradually after passing through the non-REM sleep stages first. In addition to overwhelming daytime sleepiness, narcolepsy has three other commonly associated symptoms, but these may not occur in all people:
- Sudden muscle weakness (cataplexy). This weakness is similar to the paralysis that normally occurs during REM sleep, but it lasts a few seconds to minutes while an individual is awake. Cataplexy tends to be triggered by sudden emotional reactions, such as anger, surprise, fear, or laughter. The weakness may show up as limpness at the neck, buckling of the knees, or sagging facial muscles affecting speech, or it may cause a complete body collapse.
- Sleep paralysis. People who have narcolepsy may experience a temporary inability to talk or move when falling asleep or waking up, as if they were glued to their beds.
- Vivid dreams. These dreams can occur when people who have narcolepsy first fall asleep or wake up. The dreams are so lifelike that they can be confused with reality.

"At first, I was misdiagnosed with chronic fatigue syndrome, because I was in my forties and narcolepsy symptoms usually start during the teen years. Because I didn't have any of the symptoms of chronic fatigue syndrome other than sleepiness, I went to a neurologist for help. He noticed the cataplexy (muscle weakness) right away, and then I was officially diagnosed with narcolepsy and then later on with borderline sleep apnea. Even though there is no cure for narcolepsy, you can feel like you have control if you manage it well. When you have narcolepsy, you live your life differently. But with a good plan and supportive friends and family, it all turns out OK."
- Sze-Ping
Experts estimate that as many as 350,000 Americans have narcolepsy, but fewer than 50,000 are diagnosed. The disorder may be as widespread as Parkinson's disease or multiple sclerosis, and more prevalent than cystic fibrosis, but it is less well known. Narcolepsy is often mistaken for depression, epilepsy, or the side effects of medicines. Narcolepsy can be difficult to diagnose in people who have only the symptom of excessive daytime sleepiness. It is usually diagnosed during an overnight sleep recording (PSG) that is followed by an MSLT. (See "How Are Sleep Disorders Diagnosed?" on page 44.) Both tests reveal symptoms of narcolepsy—the tendency to fall asleep rapidly and enter REM sleep early, even during brief naps.

Narcolepsy can develop at any age, but the symptoms tend to appear first during adolescence or early adulthood. About 1 of every 10 people who have narcolepsy has a close family member who has the disorder, suggesting that one can inherit a tendency to develop narcolepsy. Studies suggest that a substance in the brain called hypocretin plays a key role in narcolepsy. Most people who have narcolepsy lack hypocretin, which promotes wakefulness. Scientists believe that an autoimmune reaction—perhaps triggered by disease, viral illness, or brain injury—specifically destroys the hypocretin-generating cells in the brains of people who have narcolepsy.

Eventually, researchers may develop a treatment for narcolepsy that restores hypocretin to normal levels. In the meantime, most people who have narcolepsy find some to all of their symptoms relieved by various drug treatments. For example, central nervous system stimulants can reduce daytime sleepiness. Antidepressants and other drugs that suppress REM sleep can prevent muscle weakness, sleep paralysis, and vivid dreaming. Doctors also usually recommend that people who have narcolepsy take short naps (10–15 minutes) two or three times a day, if possible, to help control excessive daytime sleepiness.

Parasomnias (Abnormal Arousals)

In some people, the walking, talking, and other body functions normally suppressed during sleep occur during certain sleep stages. Alternatively, the paralysis or vivid images usually experienced during dreaming may persist after awakening. These occurrences are collectively known as parasomnias and include confusional arousals (a mixed state of being both asleep and awake), sleep talking, sleep walking, night terrors, sleep paralysis, and REM sleep behavior disorder (acting out dreams). Most of these disorders—such as confusional arousals, sleep walking, and night terrors—are more common in children, who tend to outgrow them once they become adults. People who are sleep-deprived also may experience some of these disorders, including sleep walking and sleep paralysis. Sleep paralysis also commonly occurs in people who have narcolepsy. Certain medications or neurological disorders appear to lead to other parasomnias, such as REM sleep behavior disorder, and these parasomnias tend to occur more in elderly people. If you or a family member has persistent episodes of sleep paralysis, sleep walking, or acting out of dreams, talk with your doctor. Taking measures to assure the safety of children and other family members who have partial arousals from sleep is very important.
"It's a scary experience, lying in bed, wanting to get up, but unable to—scary enough to almost make you not want to go to sleep anymore. I can remember, as a child, feeling as though there was a weight on me when I was trying to wake up, and I couldn't move. When I would try to wake up, I would kick my legs and flail my arms, sometimes bumping my wife. I really didn't have control over my limbs. When the symptoms got really bad, I went to a sleep specialist, who told me I had sleep paralysis. My doctor prescribed a medicine that has worked great for me. Now, I rarely have sleep paralysis—maybe 3 times per year."
- Lawrence

Do You Think You Have a Sleep Disorder?

At various points in our lives, all of us suffer from a lack of sleep that can be corrected by making sure we have the opportunity to get enough sleep. But, if you are spending enough time in bed and still wake up tired or feel very sleepy during the day, you may have a sleep disorder. See "Common Signs of a Sleep Disorder" on page 34.

One of the best ways you can tell whether you are getting enough good-quality sleep, and whether you have signs of a sleep disorder, is by keeping a sleep diary. (See "Sample Sleep Diary" on page 54.) Use this diary to record the quality and quantity of your sleep; your use of medications, alcohol, and caffeinated beverages; your exercise patterns; and how sleepy you feel during the day. After a week or so, look over this information to see how many hours of sleep or nighttime awakenings one night are linked to your being tired the next day. This information will give you a sense of how much uninterrupted sleep you need to avoid daytime sleepiness. You also can use the diary to see some of the patterns or practices that may keep you from getting a good night's sleep.

You may have a sleep disorder and should see your doctor if your sleep diary reveals any of the following:
- You consistently take more than 30 minutes each night to fall asleep.
- You consistently awaken more than a few times or for long periods of time each night.
- You take frequent naps.
- You often feel sleepy during the day—or you fall asleep at inappropriate times during the day.

Sample Sleep Diary

Name:
Today's date (include month/day/year): Monday*

Complete in the Morning
- Time I went to bed last night: 11 p.m.
- Time I woke up this morning: 7 a.m.
- No. of hours slept last night: 8
- Number of awakenings and total time awake last night: 5 times, 2 hours
- How long I took to fall asleep last night: 30 mins.
- Medications taken last night: None
- How awake did I feel when I got up this morning? (1—Wide awake; 2—Awake but a little tired; 3—Sleepy): 2

Complete in the Evening
- Number of caffeinated drinks (coffee, tea, cola) and time when I had them today: 1 drink at 8 p.m.
- Number of alcoholic drinks (beer, wine, liquor) and time when I had them today: 2 drinks at 9 p.m.
- Naptimes and lengths today: 3:30 p.m., 45 mins.
- Exercise times and lengths today: None
- How sleepy did I feel during the day today? (1—So sleepy had to struggle to stay awake during much of the day; 2—Somewhat tired; 3—Fairly alert; 4—Wide awake): 1

* This column shows example diary entries—use as a model for your own diary notes.
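As a rough illustration of the weekly review described above, the sketch below tallies a few diary fields against the warning signs just listed. It is only an example: the field names, the sample numbers, and the exact cutoffs used to interpret "more than a few times" and "frequent naps" are assumptions made for this sketch, not definitions from this guide.

```python
# Illustrative sketch only; one dictionary per diary day, modeled loosely on the
# sample Monday entry above. All entries and thresholds here are invented.
from statistics import mean

diary = [
    {"minutes_to_fall_asleep": 30, "awakenings": 5, "napped": True,  "daytime_sleepiness": 1},
    {"minutes_to_fall_asleep": 45, "awakenings": 2, "napped": False, "daytime_sleepiness": 2},
    {"minutes_to_fall_asleep": 20, "awakenings": 1, "napped": True,  "daytime_sleepiness": 3},
]

def review_week(entries):
    """Flag the possible signs of a sleep disorder listed in this guide."""
    flags = []
    if mean(e["minutes_to_fall_asleep"] for e in entries) > 30:
        flags.append("consistently taking more than 30 minutes to fall asleep")
    if mean(e["awakenings"] for e in entries) > 2:          # "more than a few" assumed to mean > 2
        flags.append("consistently awakening several times each night")
    if sum(e["napped"] for e in entries) >= len(entries) / 2:  # "frequent" assumed to mean most days
        flags.append("frequent naps")
    if any(e["daytime_sleepiness"] == 1 for e in entries):   # 1 = struggled to stay awake
        flags.append("days of struggling to stay awake")
    return flags

print(review_week(diary))
```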
How To Find a Sleep Center and Sleep Specialist

If your doctor refers you to a sleep center or sleep specialist, make sure that center or specialist is qualified to diagnose and treat your sleep problem. To find sleep centers accredited by the American Academy of Sleep Medicine, go to www.aasmnet.org and click on "Find a Sleep Center" (under the Patients & Public menu), or call 708–492–0930. To find sleep specialists certified by the American Board of Sleep Medicine, go to www.absm.org and click on "Verification of Diplomates of the ABSM."

Research

Researchers have learned a lot about sleep and sleep disorders in recent years. That knowledge has led to a better understanding of the importance of sleep to our lives and our health. Research supported by the National Heart, Lung, and Blood Institute (NHLBI) has helped identify some of the causes of sleep disorders and their effects on the heart, brain, lungs, and other body systems. The NHLBI also supports ongoing research on the most effective ways to diagnose and treat sleep disorders.

Many questions remain about sleep and sleep disorders. The NHLBI continues to support a range of research that focuses on:
- Better understanding of how a lack of sleep increases the risk for obesity, diabetes, heart disease, and stroke
- New ways to diagnose sleep disorders
- Genetic, environmental, and social factors that lead to sleep disorders
- The adverse effects from a lack of sleep on body and brain

Much of this research depends on the willingness of volunteers to participate in clinical research. If you would like to help researchers advance science on sleep or about a sleep disorder you have and possible treatments, talk to your doctor about participating in clinical research. (For more information, see "Clinical Research" on page 58.)

Clinical Research

Researchers can learn quite a bit about sleep and sleep disorders by studying animals. However, to fully understand sleep and its effect on health and functioning, as well as how best to diagnose and treat sleep disorders, researchers need to do clinical research on people. This type of research is called clinical research because it is often conducted in clinical settings, such as hospitals or doctors' offices. The two types of clinical research are clinical trials and clinical studies.
- Clinical trials test new ways to diagnose, prevent, or treat various disorders. For example, treatments (such as medicines, medical devices, surgery, or other procedures) for a disorder need to be tested in people who have the disorder. A trial helps determine whether a treatment is safe and effective in humans before it is made available for public use. In a clinical trial, participants are randomly assigned to groups. One group receives the new treatment being tested. Other groups may receive a different treatment or a placebo (an inactive substance resembling a drug being tested). Comparing results from the groups gives researchers confidence that changes in the test group are due to the new treatment and not to other factors.
- Other types of clinical studies are done to discover the factors, including environmental, behavioral, or genetic factors, that cause or worsen various disorders. Researchers may follow a group of people over time to learn what factors contribute to becoming sick.

Clinical studies and trials may be relatively brief, or may last for years and require many visits to the study sites. These sites usually are university hospitals or research centers, but they can include private doctors' offices and community hospitals.
If you participate in clinical research, the research will be explained to you in detail, you will be given a chance to ask questions, and you will be asked to provide written permission. You may not directly benefit from the results of the clinical research you participate in, but the information gathered will help others and will add to scientific knowledge. Taking part in clinical research has other benefits, as well. You'll learn more about your disorder, you'll have the support of a team of health care providers, and your health will likely be monitored closely. However, participation also can have risks, which you should discuss with your doctor. No matter what you decide, your regular medical care will not be affected.

If you're thinking about participating in a clinical study, you may have questions about the purpose of the study, the types of tests and treatment involved, how participation will affect your daily life, and whether any costs are involved. Your doctor may be able to answer some of your questions and help you find clinical studies in which you can participate. You also can visit the following Web sites to learn about being in a study and to search for clinical trials being done on your disorder:
- www.clinicaltrials.gov
- http://clinicalresearch.nih.gov
- www.nhlbi.nih.gov/studies/index.htm

For More Sleep Information

Resources From the National Heart, Lung, and Blood Institute (NHLBI)

National Center on Sleep Disorders Research
Division of Lung Diseases, NHLBI
Two Rockledge Centre, Suite 10170
6701 Rockledge Drive
Bethesda, MD 20895–7952
Phone: 301–435–0199
Fax: 301–480–3451
Web site: www.nhlbi.nih.gov/sleep

NHLBI Diseases and Conditions Index (DCI)
The DCI includes articles on sleep disorders, tests, and procedures, along with videos, podcasts, and Spanish-language articles.
Web site: www.nhlbi.nih.gov/health/dci/index.html

NHLBI Health Information Center
P.O. Box 30105
Bethesda, MD 20824–0105
Telephone: 301–592–8573
TTY: 240–629–3255
Fax: 301–592–8563
E-mail: [email protected]
Web site: www.nhlbi.nih.gov

NIH Office of Science Education
Web site (for high school supplemental curriculum: Sleep, Sleep Disorders, and Biological Rhythms): http://science.education.nih.gov

Resources From Other Sleep Organizations

American Academy of Sleep Medicine (AASM)
2510 North Frontage Road
Darien, IL 60561
Telephone: 630–737–9700
Fax: 630–737–9790
Web site: www.aasmnet.org

American Sleep Apnea Association
6856 Eastern Avenue, NW., Suite 203
Washington, DC 20012
Telephone: 202–203–3650
Fax: 202–293–3656
Web site: www.sleepapnea.org

Narcolepsy Network
P.O. Box 294
Pleasantville, NY 10570
Telephone: 401–667–2523
Fax: 401–633–6567
E-mail: [email protected]
Web site: www.narcolepsynetwork.org

National Sleep Foundation
1010 North Glebe Road, Suite 310
Arlington, VA 22201
Telephone: 703–243–1697
E-mail: [email protected]
Web site: www.sleepfoundation.org

Restless Legs Syndrome Foundation
1610 14th Street, NW., Suite 300
Rochester, MN 55901
Telephone: 507–287–6465
Fax: 507–287–6312
E-mail: [email protected]
Web site: www.rls.org

Discrimination Prohibited: Under provisions of applicable public laws enacted by Congress since 1964, no person in the United States shall, on the grounds of race, color, national origin, handicap, or age, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity (or, on the basis of sex, with respect to any education program or activity) receiving Federal financial assistance.
In addition, Executive Order 11141 prohibits discrimination on the basis of age by contractors and subcontractors in the performance of Federal contracts, and Executive Order 11246 states that no federally funded contractor may discriminate against any employee or applicant for employment because of race, color, religion, sex, or national origin. Therefore, the National Heart, Lung, and Blood Institute must be operated in compliance with these laws and Executive Orders.

NIH Publication No. 11-5271
Originally printed November 2005
Revised August 2011

USER: What are the health benefits of high-quality sleep? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
20
8
16,677
null
394
You will only rely on information from the context block, and not on any external or prior knowledge. You will limit your response to 200 words.
Find and summarize key similarities between the IDEA and the NCLB acts.
Introduction

The skills, knowledge, and credentials obtained through education are widely believed to be connected to positive occupational and economic outcomes. In recent decades, considerable attention has been devoted to improving educational attainment levels of students with disabilities. Several federal policies have aimed to require educators to pay greater attention to the educational progress and attainment of students with disabilities, and many others provide for a variety of supports with the goal of improving levels of attainment. Data collection efforts have also been launched to allow for better tracking of relevant trends.

This report discusses policies aiming to promote educational attainment and examines trends in high school graduation and college enrollment for students with disabilities. It begins with a discussion of the laws related to the education of students with disabilities at the secondary and postsecondary levels. Subsequent sections discuss the existing data on transition-aged students with disabilities, what is currently known about such students, and federal legislation and other factors that may have contributed to changes in students with disabilities' high school graduation rates and postsecondary enrollment over time.

The report offers a brief overview of what is currently known about the U.S. population of students with disabilities in secondary and postsecondary education. It focuses on data gathered in conjunction with federal programs and federally funded studies of nationally representative samples of students with disabilities. It does not attempt to provide an overview or review of existing research on transition-aged students with disabilities or to provide an in-depth examination of the differences between the rights of and services afforded to students with disabilities at the secondary and postsecondary levels. The next sections of the report provide an overview of the education and civil rights laws that aim to support students with disabilities as they work toward completing high school and potentially transition into further educational pursuits.

Education Laws

Individuals with Disabilities Education Act (IDEA)

The IDEA was originally enacted in 1975 (P.L. 94-142)[1] and was most recently reauthorized in 2004.[2] It is the primary federal act providing for special education and related services for children with disabilities between birth and 21 years old.[3] Approximately 13% of the K-12 student population received IDEA services in the 2013-2014 school year (SY).[4] The IDEA provides states with grants that support the identification, evaluation, and provision of special education services to children with disabilities. States may receive grants under the condition that, among other requirements, they provide each qualifying student with (1) an individualized education program (IEP) outlining the student's goals, and the accommodations, special education, and related services that the school will provide to the student, and (2) a free appropriate public education (FAPE) in the least restrictive environment (LRE). This means specially designed instruction to meet students' needs, provided to the greatest extent possible with their general education peers and at no cost to their families.

Beginning with its 1990 reauthorization,[5] the IDEA has required that the IEPs of students who are 16 years old or older contain a statement of transition goals and services. Transition services are defined as:

A coordinated set of activities for a child with a disability that—
(A) is designed to be within a results-oriented process, that is focused on improving the academic and functional achievement of the child with a disability to facilitate the child's movement from school to post-school activities, including post-secondary education, vocational education, integrated employment (including supported employment), continuing and adult education, adult services, independent living, or community participation;
(B) is based on the individual child's needs, taking into account the child's strengths, preferences, and interests; and
(C) includes instruction, related services, community experiences, the development of employment and other post-school adult living objectives, and, when appropriate, acquisition of daily living skills and functional vocational evaluation.[6]

The 1997 and 2004[7] amendments to the IDEA have supported students with disabilities graduating with regular diplomas and transitioning to postsecondary education by
- increasing local education agencies' (LEAs) accountability for improving the performance of students with IEPs,
- emphasizing students' progress toward meaningful educational and postsecondary goals in the IEP process, and
- requiring states to develop IDEA performance goals and indicators, including dropout and graduation rates, and to report to the Secretary of Education (the Secretary) and the public on the progress of the state and of students with disabilities in the state toward these indicators at least every two years.[8]

[1] When P.L. 94-142, the Education for All Handicapped Children Act, was reauthorized in 1990 (P.L. 101-476), its name was changed to the Individuals with Disabilities Education Act (IDEA).
[2] The Individuals with Disabilities Education Improvement Act of 2004, P.L. 108-446.
[3] For more information, see CRS Report R41833, The Individuals with Disabilities Education Act (IDEA), Part B: Key Statutory and Regulatory Provisions, by Kyrie E. Dragoo, and CRS Report R43631, The Individuals with Disabilities Education Act (IDEA), Part C: Early Intervention for Infants and Toddlers with Disabilities, by Kyrie E. Dragoo.
[4] U.S. Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2015, Table 204.30, "Children 3 to 21 years old served under Individuals with Disabilities Education Act (IDEA), Part B, by type of disability: Selected years, 1976-77 through 2013-14."
[5] P.L. 101-476.
[6] 20 U.S.C. §1401(a)(34).
[7] In the 2004 reauthorization of the IDEA, Congress stated in their findings, "As the graduation rates for children with disabilities continue to climb, providing effective transition services to promote successful post-school employment or education is an important measure of accountability for children with disabilities." (P.L. 108-446, §601(c)(14)).
[8] P.L. 105-17, §612(a)(16).

Elementary and Secondary Education Act of 1965 (ESEA)

The ESEA was originally enacted in 1965 (20 U.S.C. 6301 et seq.). It was most recently reauthorized by the Every Student Succeeds Act (ESSA; P.L. 114-95) in 2015. The ESEA is the largest source of federal aid to K-12 education, supporting educational and related services for low-achieving and other students attending elementary and secondary schools with high concentrations of students from low-income families. The largest grant program in the ESEA is Title I-A. There are a number of educational accountability requirements that states, LEAs, and schools must meet to receive Title I-A funds. For example, amendments to the ESEA enacted under the No Child Left Behind Act of 2001 (NCLB; P.L. 107-110) included several educational accountability provisions that aimed to promote the educational progress of all students in schools served. These provisions have subsequently been amended through the ESSA. Over half of public elementary and secondary schools receive Title I-A funds. While students with disabilities benefit from this funding, they are not specifically targeted by it. However, many of the ESEA's educational accountability provisions do require that schools pay particular attention to students with disabilities and likely have an effect on them.

For example, when the ESEA was amended through the NCLB in 2001, provisions were adopted requiring states to develop and implement a state accountability system to ensure that schools and LEAs made progress with respect to student achievement.[9] Under the NCLB provisions, student progress was not only systematically measured and monitored for the broad population of K-12 students served under the ESEA but also for specific subgroups of students, of which "students with disabilities" was one. Under NCLB provisions, student proficiency in relation to academic performance standards was regularly tracked in selected subject areas, as were high school graduation rates. The NCLB contained high-stakes accountability provisions featuring varied consequences for schools in which a sufficient percentage of students or subgroups of students, such as students with disabilities, failed to make sufficient academic progress in relation to the academic achievement and high school graduation standards. The accountability provisions of the NCLB, and those in place after the ESEA was amended through the ESSA, emphasize holding all students and all subgroups of students (including students with disabilities) to the same standards and levels of academic achievement, and closing gaps between subgroups of students. To comply with these accountability provisions, schools and school districts are required to pay specific attention to the academic progress and graduation rates of students with disabilities.

Higher Education Act of 1965 (HEA)

The HEA was originally enacted in 1965 (P.L. 89-329). It was most recently reauthorized in 2008 by the Higher Education Opportunity Act (HEOA; P.L. 110-315), which authorized appropriations for most HEA programs through FY2014. Funding is still being provided for HEA programs through appropriations acts. The HEA authorizes student financial aid programs that help students and their families meet the costs of attending postsecondary institutions, a series of targeted grant programs that assist students transitioning into postsecondary education, and grants that support program and institutional development at some colleges and universities.
While students with disabilities benefit from many of the HEA's student financial aid programs, the programs that specifically target support and assistance to students with disabilities are the TRIO Student Support Services (SSS) program[10] and Comprehensive Transition and Postsecondary (CTP) programs for students with intellectual disabilities.[11]

The TRIO SSS program served over 200,000 students through grants to over 1,000 projects in SY2015-2016.[12] The program, originally enacted in 1992 through amendments to the HEA,[13] provides support services to primarily low-income, first-generation college students with the aim of improving their retention, graduation rates, financial and economic literacy, and transfers from two-year to four-year schools.[14] TRIO SSS programs are also intended to foster an institutional climate supportive of potentially disconnected students. These include students with disabilities, students who are limited English proficient, students from groups that are traditionally underrepresented in postsecondary education, students who are homeless children and youths, and students who are in foster care or aging out of the foster care system. Under the TRIO SSS program, the U.S. Department of Education (ED) makes competitive grants to Institutions of Higher Education (IHEs) and combinations of IHEs. Grantees must provide statutorily defined services to an approved number of participants. At least two-thirds of participants must be either students with disabilities[15] or low-income, first-generation college students. The remaining one-third of participants must be low-income students, students with disabilities, or first-generation college students. Also, at least one-third of the participating students with disabilities must be low-income.

The CTP programs for students with intellectual disabilities served approximately 1,000 students through grants to 66 institutions in SY2015-2016.[16] The programs, enacted through the HEOA, provide transition support for students with intellectual disabilities.[17] Under provisions in the HEA, CTP programs for students with intellectual disabilities are not required to lead to a recognized credential (e.g., bachelor's or associate's degree, certificate) or adhere to the same durational requirements that regular postsecondary programs must meet (e.g., a certain number of credit-bearing clock hours). Instead, CTP programs require students with intellectual disabilities to receive curriculum advising, participate at least part-time in courses or training with students who do not have intellectual disabilities, and prepare for gainful employment.

[9] ESEA, §1111(b)(2).
[10] Higher Education Act (HEA), P.L. 113-67, §402(D); 20 U.S.C. 1070a–14.
[11] P.L. 113-67, §760; 20 U.S.C. 1140.
[12] U.S. Department of Education, Student Support Services Program Awards, FY2016, https://www2.ed.gov/programs/triostudsupp/awards.html.
[13] Higher Education Amendments of 1992, P.L. 102-325, §402(a)(2).
[14] For more background information, see CRS Report R42724, The TRIO Programs: A Primer, by Cassandria Dortch.
[15] "Disability" is defined in §12102 of the Americans with Disabilities Act (ADA; 42 U.S.C. 12101 et seq.).
[16] The U.S. Department of Education (ED) tracks the estimated enrollment of the 43 (out of 66) CTP programs that receive TPSID grants (see footnote 17). An estimated 730 students participate in these TPSID programs (an average of approximately 17 students per program). There are an additional 23 CTP programs that do not report student enrollment rates. CRS estimates that if these 23 CTP programs serve an average of 12 or more students, and those students are added to the 730 students served in the TPSID programs, more than 1,000 students would be served by CTP programs in total.
[17] A new CTP grant program, the Model Transition Programs for Students with Intellectual Disabilities into Higher Education (TPSID), which is intended to help IHEs create or expand high-quality, inclusive-model CTP programs for students with intellectual disabilities, was included in the Higher Education Opportunity Act of 2008 (HEOA; P.L. 110-315).
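For readers who want to see the arithmetic behind the TRIO SSS participation shares and the enrollment estimate in note [16] above, the short sketch below works through both. It is illustrative only: the participant counts are hypothetical, the check assumes the counted groups do not overlap, and it omits the separate rule governing the remaining one-third of participants; only the quoted shares and the 730-student and 23-program figures come from the report text above.

```python
# Illustrative sketch only. Hypothetical participant counts; the share
# requirements and the note [16] figures are quoted from the report above.

def trio_sss_mix_ok(total, disabled, low_income_first_gen, disabled_low_income):
    """Check a hypothetical TRIO SSS participant mix against the quoted shares.

    Assumes 'disabled' and 'low_income_first_gen' are non-overlapping counts.
    """
    two_thirds_rule = (disabled + low_income_first_gen) >= 2 * total / 3
    disabled_low_income_rule = disabled_low_income >= disabled / 3
    return two_thirds_rule and disabled_low_income_rule

# Example: 90 participants, 30 with disabilities (12 of them low-income),
# plus 45 low-income, first-generation students.
print(trio_sss_mix_ok(total=90, disabled=30, low_income_first_gen=45, disabled_low_income=12))

# Note [16] arithmetic: 730 students in reporting TPSID programs plus 23
# non-reporting CTP programs at an assumed 12 students each.
print(730 + 23 * 12)  # 1006, i.e., "more than 1,000 students"
```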
Civil Rights Laws

In addition to the education laws that fund programs for students with disabilities, there are two civil rights laws that protect them in secondary and postsecondary education from discrimination based on their disabilities: Section 504 of the Rehabilitation Act (P.L. 93-112) and the Americans with Disabilities Act of 1990 (ADA; 42 U.S.C. §12101 et seq.).

Section 504 of the Rehabilitation Act[18]

Section 504 prohibits discrimination on the basis of a disability by protecting the rights of people with disabilities to access programs receiving federal funding. Section 504 also provides for accommodations such as extended time on tests for students with learning disabilities, accessible classrooms for students with orthopedic impairments, and large print or braille materials for students who are visually impaired. These accommodations are available at all levels of schooling—preschool to postsecondary—in schools that receive any federal funding. All children with disabilities attending K-12 public schools who are served under Section 504 are entitled to a FAPE and an individualized accommodations plan, often called a "504 plan." At the postsecondary level, Section 504 requires IHEs to provide students with disabilities with appropriate academic adjustments and equitable access to educational programs and facilities.

ED's Office for Civil Rights (OCR) reported that in SY2011-2012, more than 6 million K-12 students were served under the IDEA, and slightly less than three-quarters of a million K-12 students were served under Section 504.[19] This means approximately 89% of children with disabilities served by K-12 public schools are served under the IDEA and approximately 11% of students with disabilities served by K-12 public schools are served solely by Section 504.[20] At the postsecondary level, however, the IDEA no longer applies to students with disabilities; instead, all students with disabilities attending IHEs that receive federal funding are protected by Section 504. Most IHEs have a 504 coordinator or a disabled student services (DSS) office on campus that coordinates accommodations such as extended time on tests, early course registration, and physical accommodations and access to campus facilities for students with disabilities.
Americans with Disabilities Act of 1990 (ADA)

The Americans with Disabilities Act of 1990, most recently amended by the ADA Amendments Act of 2008 (together, ADA),[21] includes a conforming amendment to the Rehabilitation Act that broadens the meaning of the term "disability" in both the ADA and Section 504 to protect people who have or are regarded as having a physical or mental disability that impacts one or more major life activities. The ADA provides broad nondiscrimination protection in employment, public services, public accommodations and services operated by private entities, transportation, and telecommunications for individuals with disabilities. The ADA states that its purpose is "to provide a clear and comprehensive national mandate for the elimination of discrimination against individuals with disabilities." In 2008, in response to Supreme Court and lower court decisions that narrowly interpreted the term "disability," Congress passed the ADA Amendments Act to, among other things, "carry out the ADA's objectives of providing 'a clear and comprehensive national mandate for the elimination of discrimination' and 'clear, strong, consistent, enforceable standards addressing discrimination' by reinstating a broad scope of protection to be available under the ADA."

Both Section 504 and the ADA require that educational institutions at all levels provide equal access for people with disabilities. The ADA extends the requirements of Section 504 from only institutions receiving federal financial assistance to all institutions, with some exceptions for institutions controlled by religious organizations. The ADA impacts schools from pre-K to postsecondary because it extends the rights of people with disabilities to access facilities and receive accommodations, allowing them to participate in the activities of both public and private institutions.

[18] Section 504 of the Rehabilitation Act of 1973 is commonly referred to simply as "Section 504."
[19] U.S. Department of Education, Office for Civil Rights, Civil Rights Data Collection, 2011-12, http://ocrdata.ed.gov.
[20] Because having an IEP and a 504 plan is considered duplicative, students with IEPs usually only have IEPs, and students who have disabilities that do not qualify for IDEA services (e.g., a disability that impacts a child medically or physically but not educationally) have 504 plans.
[21] 42 U.S.C. §12101 et seq.
You will only rely on information from the context block, and not on any external or prior knowledge. You will limit your response to 200 words. Find and summarize key similarities between the IDEA and the NCLB acts. Context block: Introduction The skills, knowledge, and credentials obtained through education are widely believed to be connected to positive occupational and economic outcomes. In recent decades, considerable attention has been devoted to improving educational attainment levels of students with disabilities. Several federal policies have aimed to require educators to pay greater attention to the educational progress and attainment of students with disabilities, and many others provide for a variety of supports with the goal of improving levels of attainment. Data collection efforts have also been launched to allow for better tracking of relevant trends. This report discusses policies aiming to promote educational attainment and examines trends in high school graduation and college enrollment for students with disabilities. It begins with a discussion of the laws related to the education of students with disabilities at the secondary and postsecondary levels. Subsequent sections discuss the existing data on transition-aged students with disabilities, what is currently known about such students, and federal legislation and other factors that may have contributed to changes in students with disabilities’ high school graduation rates and postsecondary enrollment over time. The report offers a brief overview of what is currently known about the U.S. population of students with disabilities in secondary and postsecondary education. It focuses on data gathered in conjunction with federal programs and federally funded studies of nationally representative samples of students with disabilities. It does not attempt to provide an overview or review of existing research on transition-aged students with disabilities or to provide an in-depth examination of the differences between the rights of and services afforded to students with disabilities at the secondary and postsecondary levels. The next sections of the report provide an overview of the education and civil rights laws that aim to support students with disabilities as they work toward completing high school and potentially transition into further educational pursuits. Education Laws Individuals with Disabilities Education Act (IDEA) The IDEA was originally enacted in 1975 (P.L. 94-142) 1 and was most recently reauthorized in 2004. 2 It is the primary federal act providing for special education and related services for children with disabilities between birth and 21 years old.3 Approximately 13% of the K-12 student population received IDEA services in the 2013-2014 school year (SY).4 The IDEA provides states with grants that support the identification, evaluation, and provision of special education services to children with disabilities. States may receive grants under the 1 When P.L. 94-142, the Education for All Handicapped Children Act, was reauthorized in 1990 (P.L. 101-476), its name was changed to the Individuals with Disabilities Education Act (IDEA). 2 The Individuals with Disabilities Education Improvement Act of 2004, P.L. 108-446. 3 For more information, see CRS Report R41833, The Individuals with Disabilities Education Act (IDEA), Part B: Key Statutory and Regulatory Provisions, by Kyrie E. 
Dragoo and CRS Report R43631, The Individuals with Disabilities Education Act (IDEA), Part C: Early Intervention for Infants and Toddlers with Disabilities, by Kyrie E. Dragoo. 4 U.S. Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2015, Table 204.30, “Children 3 to 21 years old served under Individuals with Disabilities Education Act (IDEA), Part B, by type of disability: Selected years, 1976-77 through 2013-14.” Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 2 condition that, among other requirements, they provide each qualifying student with (1) an individualized education program (IEP) outlining the student’s goals, and the accommodations, special education, and related services that the school will provide to the student, and (2) a free appropriate public education (FAPE) in the least restrictive environment (LRE). This means specially designed instruction to meet students’ needs, provided to the greatest extent possible with their general education peers and at no cost to their families. Beginning with its 1990 reauthorization,5 the IDEA has required that the IEPs of students who are 16 years old or older contain a statement of transition goals and services. Transition services are defined as: A coordinated set of activities for a child with a disability that— (A) is designed to be within a results-oriented process, that is focused on improving the academic and functional achievement of the child with a disability to facilitate the child’s movement from school to post-school activities, including post-secondary education, vocational education, integrated employment (including supported employment), continuing and adult education, adult services, independent living, or community participation; (B) is based on the individual child’s needs, taking into account the child’s strengths, preferences, and interests; and (C) includes instruction, related services, community experiences, the development of employment and other post-school adult living objectives, and, when appropriate, acquisition of daily living skills and functional vocational evaluation. 6 The 1997 and 20047 amendments to the IDEA have supported students with disabilities graduating with regular diplomas and transitioning to postsecondary education by  increasing local education agencies’ (LEAs) accountability for improving the performance of students with IEPs,  emphasizing students’ progress toward meaningful educational and postsecondary goals in the IEP process, and  requiring states to develop IDEA performance goals and indicators, including dropout and graduation rates, and to report to the Secretary of Education (the Secretary) and the public on the progress of the state and of students with disabilities in the state toward these indicators at least every two years. 8 Elementary and Secondary Education Act of 1965 (ESEA) The ESEA was originally enacted in 1965 (20 U.S.C. 6301 et seq.). It was most recently reauthorized by the Every Student Succeeds Act (ESSA; P.L. 114-95) in 2015. The ESEA is the largest source of federal aid to K-12 education, supporting educational and related services for low-achieving and other students attending elementary and secondary schools with high concentrations of students from low-income families. The largest grant program in the ESEA is 5 P.L. 101-476. 6 20 U.S.C. §1401(a)(34). 
7 In the 2004 reauthorization of the IDEA, Congress stated in their findings, “As the graduation rates for children with disabilities continue to climb, providing effective transition services to promote successful post-school employment or education is an important measure of accountability for children with disabilities.” (P.L. 108-446, §601 (c)(14)). 8 P.L. 105-17, §612 (a)(16). Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 3 Title I-A. There are a number of educational accountability requirements that states, LEAs, and schools must meet to receive Title I-A funds. For example, amendments to the ESEA enacted under the No Child Left Behind Act of 2001(NCLB; P.L. 107-110) included several educational accountability provisions that aimed to promote the educational progress of all students in schools served. These provisions have subsequently been amended through the ESSA. Over half of public elementary and secondary schools receive Title I-A funds. While students with disabilities benefit from this funding, they are not specifically targeted by it. However, many of the ESEA’s educational accountability provisions do require that schools pay particular attention to students with disabilities and likely have an effect on them. For example, when the ESEA was amended through the NCLB in 2001, provisions were adopted requiring states to develop and implement a state accountability system to ensure that schools and LEAs made progress with respect to student achievement.9 Under the NCLB provisions, student progress was not only systematically measured and monitored for the broad population of K-12 students served under the ESEA but also for specific subgroups of students, of which “students with disabilities” was one. Under NCLB provisions, student proficiency in relation to academic performance standards was regularly tracked in selected subject areas, as were high school graduation rates. The NCLB contained high-stakes accountability provisions featuring varied consequences for schools in which a sufficient percentage of students or subgroups of students, such as students with disabilities, failed to make sufficient academic progress in relation to the academic achievement and high school graduation standards. The accountability provisions of the NCLB, and those in place after the ESEA was amended through the ESSA, emphasize holding all students and all subgroups of students (including students with disabilities) to the same standards and levels of academic achievement, and closing gaps between subgroups of students. To comply with these accountability provisions, schools and school districts are required to pay specific attention to the academic progress and graduation rates of students with disabilities. Higher Education Act of 1965 (HEA) The HEA was originally enacted in 1965 (P.L. 89-329). It was most recently reauthorized in 2008 by the Higher Education Opportunity Act (HEOA; P.L. 110-315) in 2008, which authorized appropriations for most HEA programs through FY2014. Funding is still being provided for HEA programs through appropriations acts. The HEA authorizes student financial aid programs that help students and their families meet the costs of attending postsecondary institutions, a series of targeted grant programs that assist students transitioning into postsecondary education, and grants that support program and institutional development at some colleges and universities. 
While students with disabilities benefit from many of the HEA’s student financial aid programs, the programs that specifically target support and assistance to students with disabilities are the TRIO Student Support Services (SSS) program10 and Comprehensive Transition and Postsecondary (CTP) programs for students with intellectual disabilities. 11 9 ESEA, §1111(b)(2). 10 Higher Education Act (HEA), P.L. 113-67, §402(D); 20 U.S.C. 1070a–14. 11 P.L. 113-67, §760; 20 U.S.C. 1140. The TRIO SSS program served over 200,000 students through grants to over 1,000 projects in SY2015-2016. 12 The program, originally enacted in 1992 through amendments to the HEA, 13 provides support services to primarily low-income, first-generation college students with the aim of improving their retention, graduation rates, financial and economic literacy, and transfers from two-year to four-year schools.14 TRIO SSS programs are also intended to foster an institutional climate supportive of potentially disconnected students. These include students with disabilities, students who are limited English proficient, students from groups that are traditionally underrepresented in postsecondary education, students who are homeless children and youths, and students who are in foster care or aging out of the foster care system. Under the TRIO SSS program, the U.S. Department of Education (ED) makes competitive grants to Institutions of Higher Education (IHEs) and combinations of IHEs. Grantees must provide statutorily defined services to an approved number of participants. At least two-thirds of participants must be either students with disabilities15 or low-income, first-generation college students. The remaining one-third of participants must be low-income students, students with disabilities, or first-generation college students. Also, at least one-third of the participating students with disabilities must be low-income. The CTP programs for students with intellectual disabilities served approximately 1,000 students through grants to 66 institutions in SY2015-2016. 16 The programs, enacted through the HEOA, provide transition support for students with intellectual disabilities. 17 Under provisions in the HEA, CTP programs for students with intellectual disabilities are not required to lead to a recognized credential (e.g., bachelor’s or associate’s degree, certificate) or adhere to the same durational requirements that regular postsecondary programs must meet (e.g., a certain number of credit-bearing clock hours). Instead, CTP programs require students with intellectual disabilities to receive curriculum advising, participate at least part-time in courses or training with students who do not have intellectual disabilities, and prepare for gainful employment. Civil Rights Laws In addition to the education laws that fund programs for students with disabilities, there are two civil rights laws that protect them in secondary and postsecondary education from discrimination based on their disabilities: Section 504 of the Rehabilitation Act (P.L. 93-112) and the Americans with Disabilities Act of 1990 (ADA; 42 U.S.C. §12101 et seq.). 12 U.S. Department of Education, Student Support Services Program Awards, FY2016, https://www2.ed.gov/programs/triostudsupp/awards.html. 13 Higher Education Amendments of 1992, P.L. 102-325, §402(a)(2). 
14 For more background information, see CRS Report R42724, The TRIO Programs: A Primer, by Cassandria Dortch. 15 “Disability” is defined in §12102 of the Americans with Disabilities Act (ADA; 42 U.S.C. 12101 et seq.). 16 The U.S. Department of Education (ED) tracks the estimated enrollment of the 43 (out of 66) CTP programs that receive TPSID grants (see footnote 17). An estimated 730 students participate in these TPSID programs (an average of approximately 17 students per program). There are an additional 23 CTP programs that do not report student enrollment rates. CRS estimates that if these 23 CTP programs serve an average of 12 or more students, and those students are added to the 730 students served in the TPSID programs, more than 1,000 students would be served by CTP programs in total. 17 A new CTP grant program, the Model Transition Programs for Students with Intellectual Disabilities into Higher Education (TPSID), which is intended to help IHEs create or expand high-quality, inclusive-model CTP programs for students with intellectual disabilities, was included in the Higher Education Opportunity Act of 2008 (HEOA; P.L. 110- 315). Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 5 Section 504 of the Rehabilitation Act18 Section 504 prohibits discrimination on the basis of a disability by protecting the rights of people with disabilities to access programs receiving federal funding. Section 504 also provides for accommodations such as extended time on tests for students with learning disabilities, accessible classrooms for students with orthopedic impairments, and large print or braille materials for students who are visually impaired. These accommodations are available at all levels of schooling—preschool to postsecondary—in schools that receive any federal funding. All children with disabilities attending K-12 public schools who are served under Section 504 are entitled to a FAPE and an individualized accommodations plan, often called a “504 plan.” At the postsecondary level, Section 504 requires IHEs to provide students with disabilities with appropriate academic adjustments and equitable access to educational programs and facilities. ED’s Office for Civil Rights (OCR) reported that in SY2011-2012, more than 6 million K-12 students were served under the IDEA, and slightly less than three-quarters of a million K-12 students were served under Section 504. 19 This means approximately 89% of children with disabilities served by K-12 public schools are served under the IDEA and approximately 11% of students with disabilities served by K-12 public schools are served solely by Section 504.20 At the postsecondary level, however, the IDEA no longer applies to students with disabilities; instead, all students with disabilities attending IHEs that receive federal funding are protected by Section 504. Most IHEs have a 504 coordinator or a disabled student services (DSS) office on campus that coordinates accommodations such as extended time on tests, early course registration, and physical accommodations and access to campus facilities for students with disabilities. 
Americans with Disabilities Act of 1990 (ADA) The Americans with Disabilities Act of 1990, most recently amended by the ADA Amendments Act of 2008 (together, ADA), 21 includes a conforming amendment to the Rehabilitation Act that broadens the meaning of the term “disability” in both the ADA and Section 504 to protect people who have or are regarded as having a physical or mental disability that impacts one or more major life activities. The ADA provides broad nondiscrimination protection in employment, public services, public accommodations and services operated by private entities, transportation, and telecommunications for individuals with disabilities. The ADA states that its purpose is “to provide a clear and comprehensive national mandate for the elimination of discrimination against individuals with disabilities.” In 2008, in response to Supreme Court and lower court decisions that narrowly interpreted the term “disability,” Congress passed the ADA Amendments Act to, among other things, “carry out the ADA's objectives of providing 'a clear and comprehensive national mandate for the elimination of discrimination' and 'clear, strong, consistent, enforceable standards addressing discrimination' by reinstating a broad scope of protection to be available under the ADA.” Both Section 504 and the ADA require that educational institutions at all levels provide equal access for people with disabilities. The ADA extends the requirements of Section 504 from only institutions receiving federal financial assistance to all institutions, with some exceptions for 18 Section 504 of the Rehabilitation Act of 1973 is commonly referred to simply as “Section 504.” 19 U.S. Department of Education, Office for Civil Rights, Civil Rights Data Collection, 2011-12, http://ocrdata.ed.gov. 20 Because having an IEP and a 504 plan is considered duplicative, students with IEPs usually only have IEPs, and students who have disabilities that do not qualify for IDEA services (e.g., a disability that impacts a child medically or physically but not educationally) have 504 plans. 21 42 U.S.C. §12101 et seq. Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 6 institutions controlled by religious organizations. The ADA impacts schools from pre-K to postsecondary because it extends the rights of people with disabilities to access facilities and receive accommodations, allowing them to participate in the activities of both public and private institutions.
You will only rely on information from the context block, and not on any external or prior knowledge. You will limit your response to 200 words. EVIDENCE: Introduction The skills, knowledge, and credentials obtained through education are widely believed to be connected to positive occupational and economic outcomes. In recent decades, considerable attention has been devoted to improving educational attainment levels of students with disabilities. Several federal policies have aimed to require educators to pay greater attention to the educational progress and attainment of students with disabilities, and many others provide for a variety of supports with the goal of improving levels of attainment. Data collection efforts have also been launched to allow for better tracking of relevant trends. This report discusses policies aiming to promote educational attainment and examines trends in high school graduation and college enrollment for students with disabilities. It begins with a discussion of the laws related to the education of students with disabilities at the secondary and postsecondary levels. Subsequent sections discuss the existing data on transition-aged students with disabilities, what is currently known about such students, and federal legislation and other factors that may have contributed to changes in students with disabilities’ high school graduation rates and postsecondary enrollment over time. The report offers a brief overview of what is currently known about the U.S. population of students with disabilities in secondary and postsecondary education. It focuses on data gathered in conjunction with federal programs and federally funded studies of nationally representative samples of students with disabilities. It does not attempt to provide an overview or review of existing research on transition-aged students with disabilities or to provide an in-depth examination of the differences between the rights of and services afforded to students with disabilities at the secondary and postsecondary levels. The next sections of the report provide an overview of the education and civil rights laws that aim to support students with disabilities as they work toward completing high school and potentially transition into further educational pursuits. Education Laws Individuals with Disabilities Education Act (IDEA) The IDEA was originally enacted in 1975 (P.L. 94-142) 1 and was most recently reauthorized in 2004. 2 It is the primary federal act providing for special education and related services for children with disabilities between birth and 21 years old.3 Approximately 13% of the K-12 student population received IDEA services in the 2013-2014 school year (SY).4 The IDEA provides states with grants that support the identification, evaluation, and provision of special education services to children with disabilities. States may receive grants under the 1 When P.L. 94-142, the Education for All Handicapped Children Act, was reauthorized in 1990 (P.L. 101-476), its name was changed to the Individuals with Disabilities Education Act (IDEA). 2 The Individuals with Disabilities Education Improvement Act of 2004, P.L. 108-446. 3 For more information, see CRS Report R41833, The Individuals with Disabilities Education Act (IDEA), Part B: Key Statutory and Regulatory Provisions, by Kyrie E. Dragoo and CRS Report R43631, The Individuals with Disabilities Education Act (IDEA), Part C: Early Intervention for Infants and Toddlers with Disabilities, by Kyrie E. Dragoo. 4 U.S. 
Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2015, Table 204.30, “Children 3 to 21 years old served under Individuals with Disabilities Education Act (IDEA), Part B, by type of disability: Selected years, 1976-77 through 2013-14.” Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 2 condition that, among other requirements, they provide each qualifying student with (1) an individualized education program (IEP) outlining the student’s goals, and the accommodations, special education, and related services that the school will provide to the student, and (2) a free appropriate public education (FAPE) in the least restrictive environment (LRE). This means specially designed instruction to meet students’ needs, provided to the greatest extent possible with their general education peers and at no cost to their families. Beginning with its 1990 reauthorization,5 the IDEA has required that the IEPs of students who are 16 years old or older contain a statement of transition goals and services. Transition services are defined as: A coordinated set of activities for a child with a disability that— (A) is designed to be within a results-oriented process, that is focused on improving the academic and functional achievement of the child with a disability to facilitate the child’s movement from school to post-school activities, including post-secondary education, vocational education, integrated employment (including supported employment), continuing and adult education, adult services, independent living, or community participation; (B) is based on the individual child’s needs, taking into account the child’s strengths, preferences, and interests; and (C) includes instruction, related services, community experiences, the development of employment and other post-school adult living objectives, and, when appropriate, acquisition of daily living skills and functional vocational evaluation. 6 The 1997 and 20047 amendments to the IDEA have supported students with disabilities graduating with regular diplomas and transitioning to postsecondary education by  increasing local education agencies’ (LEAs) accountability for improving the performance of students with IEPs,  emphasizing students’ progress toward meaningful educational and postsecondary goals in the IEP process, and  requiring states to develop IDEA performance goals and indicators, including dropout and graduation rates, and to report to the Secretary of Education (the Secretary) and the public on the progress of the state and of students with disabilities in the state toward these indicators at least every two years. 8 Elementary and Secondary Education Act of 1965 (ESEA) The ESEA was originally enacted in 1965 (20 U.S.C. 6301 et seq.). It was most recently reauthorized by the Every Student Succeeds Act (ESSA; P.L. 114-95) in 2015. The ESEA is the largest source of federal aid to K-12 education, supporting educational and related services for low-achieving and other students attending elementary and secondary schools with high concentrations of students from low-income families. The largest grant program in the ESEA is 5 P.L. 101-476. 6 20 U.S.C. §1401(a)(34). 
7 In the 2004 reauthorization of the IDEA, Congress stated in their findings, “As the graduation rates for children with disabilities continue to climb, providing effective transition services to promote successful post-school employment or education is an important measure of accountability for children with disabilities.” (P.L. 108-446, §601 (c)(14)). 8 P.L. 105-17, §612 (a)(16). Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 3 Title I-A. There are a number of educational accountability requirements that states, LEAs, and schools must meet to receive Title I-A funds. For example, amendments to the ESEA enacted under the No Child Left Behind Act of 2001(NCLB; P.L. 107-110) included several educational accountability provisions that aimed to promote the educational progress of all students in schools served. These provisions have subsequently been amended through the ESSA. Over half of public elementary and secondary schools receive Title I-A funds. While students with disabilities benefit from this funding, they are not specifically targeted by it. However, many of the ESEA’s educational accountability provisions do require that schools pay particular attention to students with disabilities and likely have an effect on them. For example, when the ESEA was amended through the NCLB in 2001, provisions were adopted requiring states to develop and implement a state accountability system to ensure that schools and LEAs made progress with respect to student achievement.9 Under the NCLB provisions, student progress was not only systematically measured and monitored for the broad population of K-12 students served under the ESEA but also for specific subgroups of students, of which “students with disabilities” was one. Under NCLB provisions, student proficiency in relation to academic performance standards was regularly tracked in selected subject areas, as were high school graduation rates. The NCLB contained high-stakes accountability provisions featuring varied consequences for schools in which a sufficient percentage of students or subgroups of students, such as students with disabilities, failed to make sufficient academic progress in relation to the academic achievement and high school graduation standards. The accountability provisions of the NCLB, and those in place after the ESEA was amended through the ESSA, emphasize holding all students and all subgroups of students (including students with disabilities) to the same standards and levels of academic achievement, and closing gaps between subgroups of students. To comply with these accountability provisions, schools and school districts are required to pay specific attention to the academic progress and graduation rates of students with disabilities. Higher Education Act of 1965 (HEA) The HEA was originally enacted in 1965 (P.L. 89-329). It was most recently reauthorized in 2008 by the Higher Education Opportunity Act (HEOA; P.L. 110-315) in 2008, which authorized appropriations for most HEA programs through FY2014. Funding is still being provided for HEA programs through appropriations acts. The HEA authorizes student financial aid programs that help students and their families meet the costs of attending postsecondary institutions, a series of targeted grant programs that assist students transitioning into postsecondary education, and grants that support program and institutional development at some colleges and universities. 
While students with disabilities benefit from many of the HEA’s student financial aid programs, the programs that specifically target support and assistance to students with disabilities are the TRIO Student Support Services (SSS) program10 and Comprehensive Transition and Postsecondary (CTP) programs for students with intellectual disabilities. 11 9 ESEA, §1111(b)(2). 10 Higher Education Act (HEA), P.L. 113-67, §402(D); 20 U.S.C. 1070a–14. 11 P.L. 113-67, §760; 20 U.S.C. 1140. Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 4 The TRIO SSS program served over 200,000 students through grants to over 1,000 projects in SY2015-2016. 12 The program, originally enacted in 1992 through amendments to the HEA, 13 provides support services to primarily low-income first generation college students with the aim of improving their retention, graduation rates, financial and economic literacy, and transfers from two-year to four-year schools.14 TRIO SSS programs are also intended to foster an institutional climate supportive of potentially disconnected students. These include students with disabilities, students who are limited English proficient, students from groups that are traditionally underrepresented in postsecondary education, students who are homeless children and youths, and students who are in foster care or aging out of the foster care system. Under the TRIO SSS program, the U.S. Department of Education (ED) makes competitive grants to Institutions of Higher Education (IHEs) and combinations of IHEs. Grantees must provide statutorily defined services to an approved number of participants. At least two-thirds of participants must be either students with disabilities15 or low-income, first-generation college students. The remaining onethird of participants must be low-income students, students with disabilities, or first-generation college students. Also, at least one-third of the participating students with disabilities must be low-income. The CTP programs for students with intellectual disabilities served approximately 1,000 students through grants to 66 institutions in SY2015-2016. 16 The programs, enacted through the HEOA, provide transition support for students with intellectual disabilities. 17 Under provisions in the HEA, CTP programs for students with intellectual disabilities are not required to lead to a recognized credential (e.g., bachelor’s or associate’s degree, certificate) or adhere to the same durational requirements that regular postsecondary programs must meet (e.g., a certain number of credit-bearing clock hours). Instead, CTP programs require students with intellectual disabilities to receive curriculum advising, participate at least part-time in courses or training with students who do not have intellectual disabilities, and prepare for gainful employment. Civil Rights Laws In addition to the education laws that fund programs for students with disabilities, there are two civil rights laws that protect them in secondary and postsecondary education from discrimination based on their disabilities: Section 504 of the Rehabilitation Act (P.L. 93-112) and the Americans with Disabilities Act of 1990 (ADA; 42 U.S.C. §12101 et seq.). 12 U.S. Department of Education, Student Support Services Program Awards, FY2016, https://www2.ed.gov/programs/ triostudsupp/awards.html. 13 Higher Education Amendments of 1992, P.L. 102-325, §402(a)(2). 
14 For more background information, see CRS Report R42724, The TRIO Programs: A Primer, by Cassandria Dortch. 15 “Disability” is defined in §12102 of the Americans with Disabilities Act (ADA; 42 U.S.C. 12101 et seq.). 16 The U.S. Department of Education (ED) tracks the estimated enrollment of the 43 (out of 66) CTP programs that receive TPSID grants (see footnote 17). An estimated 730 students participate in these TPSID programs (an average of approximately 17 students per program). There are an additional 23 CTP programs that do not report student enrollment rates. CRS estimates that if these 23 CTP programs serve an average of 12 or more students, and those students are added to the 730 students served in the TPSID programs, more than 1,000 students would be served by CTP programs in total. 17 A new CTP grant program, the Model Transition Programs for Students with Intellectual Disabilities into Higher Education (TPSID), which is intended to help IHEs create or expand high-quality, inclusive-model CTP programs for students with intellectual disabilities, was included in the Higher Education Opportunity Act of 2008 (HEOA; P.L. 110- 315). Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 5 Section 504 of the Rehabilitation Act18 Section 504 prohibits discrimination on the basis of a disability by protecting the rights of people with disabilities to access programs receiving federal funding. Section 504 also provides for accommodations such as extended time on tests for students with learning disabilities, accessible classrooms for students with orthopedic impairments, and large print or braille materials for students who are visually impaired. These accommodations are available at all levels of schooling—preschool to postsecondary—in schools that receive any federal funding. All children with disabilities attending K-12 public schools who are served under Section 504 are entitled to a FAPE and an individualized accommodations plan, often called a “504 plan.” At the postsecondary level, Section 504 requires IHEs to provide students with disabilities with appropriate academic adjustments and equitable access to educational programs and facilities. ED’s Office for Civil Rights (OCR) reported that in SY2011-2012, more than 6 million K-12 students were served under the IDEA, and slightly less than three-quarters of a million K-12 students were served under Section 504. 19 This means approximately 89% of children with disabilities served by K-12 public schools are served under the IDEA and approximately 11% of students with disabilities served by K-12 public schools are served solely by Section 504.20 At the postsecondary level, however, the IDEA no longer applies to students with disabilities; instead, all students with disabilities attending IHEs that receive federal funding are protected by Section 504. Most IHEs have a 504 coordinator or a disabled student services (DSS) office on campus that coordinates accommodations such as extended time on tests, early course registration, and physical accommodations and access to campus facilities for students with disabilities. 
Americans with Disabilities Act of 1990 (ADA) The Americans with Disabilities Act of 1990, most recently amended by the ADA Amendments Act of 2008 (together, ADA), 21 includes a conforming amendment to the Rehabilitation Act that broadens the meaning of the term “disability” in both the ADA and Section 504 to protect people who have or are regarded as having a physical or mental disability that impacts one or more major life activities. The ADA provides broad nondiscrimination protection in employment, public services, public accommodations and services operated by private entities, transportation, and telecommunications for individuals with disabilities. The ADA states that its purpose is “to provide a clear and comprehensive national mandate for the elimination of discrimination against individuals with disabilities.” In 2008, in response to Supreme Court and lower court decisions that narrowly interpreted the term “disability,” Congress passed the ADA Amendments Act to, among other things, “carry out the ADA's objectives of providing 'a clear and comprehensive national mandate for the elimination of discrimination' and 'clear, strong, consistent, enforceable standards addressing discrimination' by reinstating a broad scope of protection to be available under the ADA.” Both Section 504 and the ADA require that educational institutions at all levels provide equal access for people with disabilities. The ADA extends the requirements of Section 504 from only institutions receiving federal financial assistance to all institutions, with some exceptions for 18 Section 504 of the Rehabilitation Act of 1973 is commonly referred to simply as “Section 504.” 19 U.S. Department of Education, Office for Civil Rights, Civil Rights Data Collection, 2011-12, http://ocrdata.ed.gov. 20 Because having an IEP and a 504 plan is considered duplicative, students with IEPs usually only have IEPs, and students who have disabilities that do not qualify for IDEA services (e.g., a disability that impacts a child medically or physically but not educationally) have 504 plans. 21 42 U.S.C. §12101 et seq. Students with Disabilities: High School to Postsecondary Transition Congressional Research Service R44887 · VERSION 2 · UPDATED 6 institutions controlled by religious organizations. The ADA impacts schools from pre-K to postsecondary because it extends the rights of people with disabilities to access facilities and receive accommodations, allowing them to participate in the activities of both public and private institutions. USER: Find and summarize key similarities between the IDEA and the NCLB acts. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
26
12
2,815
null
307
Use only the information provided in this prompt and context for your answer. Do not use any outside information, and if you cannot answer from the provided context, please state, "I cannot provide an answer due to lack of context." Also, please break down your answer into bullet points with an explanation of each point.
According to the following text, what is the significance of genetics when it comes to Granulomatosis with polyangiitis (GPA)?
Granulomatosis with polyangiitis Description Granulomatosis with polyangiitis (GPA) is a condition that causes inflammation that primarily affects the respiratory tract (including the lungs and airways) and the kidneys. This disorder was formerly known as Wegener granulomatosis. A characteristic feature of GPA is inflammation of blood vessels (vasculitis), particularly the small- and medium-sized blood vessels in the lungs, nose, sinuses, windpipe, and kidneys, although vessels in any organ can be involved. Polyangiitis refers to the inflammation of multiple types of vessels, such as small arteries and veins. Vasculitis causes scarring and tissue death in the vessels and impedes blood flow to tissues and organs. Another characteristic feature of GPA is the formation of granulomas, which are small areas of inflammation composed of immune cells that aid in the inflammatory reaction. The granulomas usually occur in the lungs or airways of people with this condition, although they can occur in the eyes or other organs. As granulomas grow, they can invade surrounding areas, causing tissue damage. The signs and symptoms of GPA vary based on the tissues and organs affected by vasculitis. Many people with this condition experience a vague feeling of discomfort (malaise), fever, weight loss, or other general symptoms of the body's immune reaction. In most people with GPA, inflammation begins in the vessels of the respiratory tract, leading to nasal congestion, frequent nosebleeds, shortness of breath, or coughing. Severe inflammation in the nose can lead to a hole in the tissue that separates the two nostrils (nasal septum perforation) or a collapse of the septum, causing a sunken bridge of the nose (saddle nose). The kidneys are commonly affected in people with GPA. Tissue damage caused by vasculitis in the kidneys can lead to decreased kidney function, which may cause increased blood pressure or blood in the urine, and life-threatening kidney failure. Inflammation can also occur in other regions of the body, including the eyes, middle and inner ear structures, skin, joints, nerves, heart, and brain. Depending on which systems are involved, additional symptoms can include skin rashes, inner ear pain, swollen and painful joints, and numbness or tingling in the limbs. GPA is most common in middle-aged adults, although it can occur at any age. If untreated, the condition is usually fatal within 2 years of diagnosis. Even after treatment, vasculitis can return. Frequency GPA is a rare disorder that affects an estimated 3 in 100,000 people in the United States. Causes The genetic basis of GPA is not well understood. Having a particular version of the HLA-DPB1 gene is the strongest genetic risk factor for developing this condition, although several other genes, some of which have not been identified, may be involved. It is likely that a combination of genetic and environmental factors lead to GPA. GPA is an autoimmune disorder. Such disorders occur when the immune system malfunctions and attacks the body's own tissues and organs. Approximately 90 percent of people with GPA have an abnormal immune protein called an anti-neutrophil cytoplasmic antibody (ANCA) in their blood. Antibodies normally bind to specific foreign particles and germs, marking them for destruction, but ANCAs attack normal human proteins. Most people with GPA have an ANCA that attacks the human protein proteinase 3 (PR3). A few affected individuals have an ANCA that attacks a protein called myeloperoxidase (MPO). 
When these antibodies attach to the protein they recognize, they trigger inflammation, which contributes to the signs and symptoms of GPA. The HLA-DPB1 gene belongs to a family of genes called the human leukocyte antigen (HLA) complex. The HLA complex helps the immune system distinguish the body's own proteins from proteins made by foreign invaders (such as viruses and bacteria). Each HLA gene has many different normal variations, allowing each person's immune system to react to a wide range of foreign proteins. A particular variant of the HLA-DPB1 gene called HLA-DPB1*0401 has been found more frequently in people with GPA, especially those with ANCAs, than in people without the condition. Because the HLA-DPB1 gene is involved in the immune system, changes in it might be related to the autoimmune response and inflammation in the respiratory tract and kidneys characteristic of GPA. However, it is unclear what specific role the HLA-DPB1*0401 gene variant plays in development of this condition. Learn more about the gene associated with Granulomatosis with polyangiitis • HLA-DPB1 Inheritance The inheritance pattern of GPA is unknown. Most instances are sporadic and occur in individuals with no history of the disorder in their family. Only rarely is more than one member of the same family affected by the disorder.
Use only the information provided in this prompt and context for your answer. Do not use any outside information, and if you cannot answer from the provided context, please state, "I cannot provide an answer due to lack of context." Also, please break down your answer into bullet points with an explanation of each point. According to the following text, what is the significance of genetics when it comes to Granulomatosis with polyangiitis (GPA)? Description Granulomatosis with polyangiitis (GPA) is a condition that causes inflammation that primarily affects the respiratory tract (including the lungs and airways) and the kidneys. This disorder is formerly known as Wegener granulomatosis. A characteristic feature of GPA is inflammation of blood vessels (vasculitis), particularly the small- and mediumsized blood vessels in the lungs, nose, sinuses, windpipe, and kidneys, although vessels in any organ can be involved. Polyangiitis refers to the inflammation of multiple types of vessels, such as small arteries and veins. Vasculitis causes scarring and tissue death in the vessels and impedes blood flow to tissues and organs. Another characteristic feature of GPA is the formation of granulomas, which are small areas of inflammation composed of immune cells that aid in the inflammatory reaction. The granulomas usually occur in the lungs or airways of people with this condition, although they can occur in the eyes or other organs. As granulomas grow, they can invade surrounding areas, causing tissue damage. The signs and symptoms of GPA vary based on the tissues and organs affected by vasculitis. Many people with this condition experience a vague feeling of discomfort ( malaise), fever, weight loss, or other general symptoms of the body's immune reaction. In most people with GPA, inflammation begins in the vessels of the respiratory tract, leading to nasal congestion, frequent nosebleeds, shortness of breath, or coughing. Severe inflammation in the nose can lead to a hole in the tissue that separates the two nostrils (nasal septum perforation) or a collapse of the septum, causing a sunken bridge of the nose (saddle nose). The kidneys are commonly affected in people with GPA. Tissue damage caused by vasculitis in the kidneys can lead to decreased kidney function, which may cause increased blood pressure or blood in the urine, and life-threatening kidney failure. Inflammation can also occur in other regions of the body, including the eyes, middle and inner ear structures, skin, joints, nerves, heart, and brain. Depending on which systems are involved, additional symptoms can include skin rashes, inner ear pain, swollen and painful joints, and numbness or tingling in the limbs. GPA is most common in middle-aged adults, although it can occur at any age. If untreated, the condition is usually fatal within 2 years of diagnosis. Even after treatment, vasculitis can return. Frequency GPA is a rare disorder that affects an estimated 3 in 100,000 people in the United States. Causes The genetic basis of GPA is not well understood. Having a particular version of the HLADPB1 gene is the strongest genetic risk factor for developing this condition, although several other genes, some of which have not been identified, may be involved. It is likely that a combination of genetic and environmental factors lead to GPA. GPA is an autoimmune disorder. Such disorders occur when the immune system malfunctions and attacks the body's own tissues and organs. 
Approximately 90 percent of people with GPA have an abnormal immune protein called an anti-neutrophil cytoplasmic antibody (ANCA) in their blood. Antibodies normally bind to specific foreign particles and germs, marking them for destruction, but ANCAs attack normal human proteins. Most people with GPA have an ANCA that attacks the human protein proteinase 3 (PR3). A few affected individuals have an ANCA that attacks a protein called myeloperoxidase (MPO). When these antibodies attach to the protein they recognize, they trigger inflammation, which contributes to the signs and symptoms of GPA. The HLA-DPB1 gene belongs to a family of genes called the human leukocyte antigen ( HLA) complex. The HLA complex helps the immune system distinguish the body's own proteins from proteins made by foreign invaders (such as viruses and bacteria). Each HLA gene has many different normal variations, allowing each person's immune system to react to a wide range of foreign proteins. A particular variant of the HLA-DPB1 gene called HLA-DPB1*0401 has been found more frequently in people with GPA, especially those with ANCAs, than in people without the condition. Because the HLA-DPB1 gene is involved in the immune system, changes in it might be related to the autoimmune response and inflammation in the respiratory tract and kidneys characteristic of GPA. However, it is unclear what specific role the HLA-DPB1* 0401 gene variant plays in development of this condition. Learn more about the gene associated with Granulomatosis with polyangiitis • HLA-DPB1 Inheritance The inheritance pattern of GPA is unknown. Most instances are sporadic and occur in individuals with no history of the disorder in their family. Only rarely is more than one member of the same family affected by the disorder.
Use only the information provided in this prompt and context for your answer. Do not use any outside information, and if you cannot answer from the provided context, please state, "I cannot provide an answer due to lack of context." Also, please break down your answer into bullet points with an explanation of each point. EVIDENCE: Granulomatosis with polyangiitis Description Granulomatosis with polyangiitis (GPA) is a condition that causes inflammation that primarily affects the respiratory tract (including the lungs and airways) and the kidneys. This disorder is formerly known as Wegener granulomatosis. A characteristic feature of GPA is inflammation of blood vessels (vasculitis), particularly the small- and mediumsized blood vessels in the lungs, nose, sinuses, windpipe, and kidneys, although vessels in any organ can be involved. Polyangiitis refers to the inflammation of multiple types of vessels, such as small arteries and veins. Vasculitis causes scarring and tissue death in the vessels and impedes blood flow to tissues and organs. Another characteristic feature of GPA is the formation of granulomas, which are small areas of inflammation composed of immune cells that aid in the inflammatory reaction. The granulomas usually occur in the lungs or airways of people with this condition, although they can occur in the eyes or other organs. As granulomas grow, they can invade surrounding areas, causing tissue damage. The signs and symptoms of GPA vary based on the tissues and organs affected by vasculitis. Many people with this condition experience a vague feeling of discomfort ( malaise), fever, weight loss, or other general symptoms of the body's immune reaction. In most people with GPA, inflammation begins in the vessels of the respiratory tract, leading to nasal congestion, frequent nosebleeds, shortness of breath, or coughing. Severe inflammation in the nose can lead to a hole in the tissue that separates the two nostrils (nasal septum perforation) or a collapse of the septum, causing a sunken bridge of the nose (saddle nose). The kidneys are commonly affected in people with GPA. Tissue damage caused by vasculitis in the kidneys can lead to decreased kidney function, which may cause increased blood pressure or blood in the urine, and life-threatening kidney failure. Inflammation can also occur in other regions of the body, including the eyes, middle and inner ear structures, skin, joints, nerves, heart, and brain. Depending on which systems are involved, additional symptoms can include skin rashes, inner ear pain, swollen and painful joints, and numbness or tingling in the limbs. GPA is most common in middle-aged adults, although it can occur at any age. If untreated, the condition is usually fatal within 2 years of diagnosis. Even after treatment, vasculitis can return. Frequency GPA is a rare disorder that affects an estimated 3 in 100,000 people in the United States. Causes The genetic basis of GPA is not well understood. Having a particular version of the HLADPB1 gene is the strongest genetic risk factor for developing this condition, although several other genes, some of which have not been identified, may be involved. It is likely that a combination of genetic and environmental factors lead to GPA. GPA is an autoimmune disorder. Such disorders occur when the immune system malfunctions and attacks the body's own tissues and organs. Approximately 90 percent of people with GPA have an abnormal immune protein called an anti-neutrophil cytoplasmic antibody (ANCA) in their blood. 
Antibodies normally bind to specific foreign particles and germs, marking them for destruction, but ANCAs attack normal human proteins. Most people with GPA have an ANCA that attacks the human protein proteinase 3 (PR3). A few affected individuals have an ANCA that attacks a protein called myeloperoxidase (MPO). When these antibodies attach to the protein they recognize, they trigger inflammation, which contributes to the signs and symptoms of GPA. The HLA-DPB1 gene belongs to a family of genes called the human leukocyte antigen ( HLA) complex. The HLA complex helps the immune system distinguish the body's own proteins from proteins made by foreign invaders (such as viruses and bacteria). Each HLA gene has many different normal variations, allowing each person's immune system to react to a wide range of foreign proteins. A particular variant of the HLA-DPB1 gene called HLA-DPB1*0401 has been found more frequently in people with GPA, especially those with ANCAs, than in people without the condition. Because the HLA-DPB1 gene is involved in the immune system, changes in it might be related to the autoimmune response and inflammation in the respiratory tract and kidneys characteristic of GPA. However, it is unclear what specific role the HLA-DPB1* 0401 gene variant plays in development of this condition. Learn more about the gene associated with Granulomatosis with polyangiitis • HLA-DPB1 Inheritance The inheritance pattern of GPA is unknown. Most instances are sporadic and occur in individuals with no history of the disorder in their family. Only rarely is more than one member of the same family affected by the disorder. USER: According to the following text, what is the significance of genetics when it comes to Granulomatosis with polyangiitis (GPA)? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
55
19
762
null
243
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
What are the two most prevalent bacteria that cause neonatal meningitis? What are features of NMEC that allow for the bacteria to cause disease? What sequence type is associated with neonatal meningitis? Finally, what does O18:K1:H7 mean? Can you answer these questions in 500 words or less?
Neonatal meningitis (NM) is a devastating disease with a mortality rate of 10–15% and severe neurological sequelae including hearing loss, reduced motor skills, and impaired development in 30–50% of cases (Doctor et al., 2001; Stevens et al., 2003; Harvey et al., 1999). The incidence of disease is highest in low-income countries and occurs at a rate of 0.1–6.1/1000 live births (Harvey et al., 1999). Escherichia coli is the second most common cause of NM in full-term infants (herein NMEC), after group B Streptococcus (GBS) (Ouchenir et al., 2017; Gaschignard et al., 2011), and the most common cause of meningitis in preterm neonates (Gaschignard et al., 2011; Basmaci et al., 2015). Together, these two pathogens cause ~60% of all cases, with on average one case of NMEC for every two cases of GBS (May et al., 2005; Holt et al., 2001). In several countries, NM incidence caused by GBS has declined due to maternal intrapartum antibiotic prophylaxis; however, NM incidence caused by E. coli remains the same (May et al., 2005; van der Flier, 2021). Moreover, NMEC is a significant cause of relapsed infections in neonates (Anderson and Gilbert, 1990). NMEC are categorised genetically based on multi-locus sequence type (ST) or by serotyping based on cell-surface O antigen (O), capsule (K), and flagella (H) antigens. Analysis of NMEC diversity in France revealed ~25% of isolates belong to the ST95 clonal complex (STc95) (Geslain et al., 2019), however, a global picture of NMEC epidemiology is lacking. NMEC possess a limited diversity of serotypes, dominated by O18:K1:H7, O1:K1, O7:K1, O16:K1, O83:K1, and O45:K1:H7, which together account for >70% of NMEC (Sarff et al., 1975; Plainvert et al., 2007; Bidet et al., 2007; Johnson et al., 2002). Notably, ~80% of NMEC express the K1 capsule, a polysaccharide comprising linear homopolymers of α2–8-linked N-acetyl neuraminic acid (Sarff et al., 1975; Robbins et al., 1974). Apart from the K1 capsule, specific NMEC virulence factors are less-well defined, though studies have demonstrated a role for S fimbriae (Prasadarao et al., 1993), the outer membrane protein OmpA (Prasadarao et al., 1996), the endothelial invasin IbeA (Huang et al., 2001), and the cytotoxin necrotising factor CNF1 (Wang and Kim, 2013) in translocation of NMEC across the blood–brain barrier and pathogenesis. A large plasmid encoding colicin V (ColV), colicin Ia bacteriocins, and several virulence genes including iron-chelating siderophore systems has also been strongly associated with NMEC virulence (Peigne et al., 2009). Despite being the second major NM aetiology, genomic studies on NMEC are lacking, with most reporting single NMEC complete genomes. Here, we present the genomic analyses of a collection of 58 NMEC isolates obtained from seven different geographic regions over 46 years to understand virulence gene content, antibiotic resistance, and genomic diversity. In addition, we provide a complete genome for 18 NMEC isolates representing different STs, serotypes, and virulence gene profiles, thus more than tripling the number of available NMEC genomes that can be used as references in future studies. Three infants in our study suffered recrudescent invasive NMEC infection, and we show this was caused by the same isolate. We further revealed that patients that suffered recrudescent invasive infection had severe gut dysbiosis, and detected the infecting isolate in the intestinal microflora, suggesting NMEC colonisation of the gut provides a reservoir that can seed repeat infection.
[question] What are the two most prevalent bacteria that cause neonatal meningitis? What are features of NMEC that allow for the bacteria to cause disease? What sequence type is associated with neonatal meningitis? Finally, what does O18:K1:H7 mean? Can you answer these questions in 500 words or less? ===================== [text] Neonatal meningitis (NM) is a devastating disease with a mortality rate of 10–15% and severe neurological sequelae including hearing loss, reduced motor skills, and impaired development in 30–50% of cases (Doctor et al., 2001; Stevens et al., 2003; Harvey et al., 1999). The incidence of disease is highest in low-income countries and occurs at a rate of 0.1–6.1/1000 live births (Harvey et al., 1999). Escherichia coli is the second most common cause of NM in full-term infants (herein NMEC), after group B Streptococcus (GBS) (Ouchenir et al., 2017; Gaschignard et al., 2011), and the most common cause of meningitis in preterm neonates (Gaschignard et al., 2011; Basmaci et al., 2015). Together, these two pathogens cause ~60% of all cases, with on average one case of NMEC for every two cases of GBS (May et al., 2005; Holt et al., 2001). In several countries, NM incidence caused by GBS has declined due to maternal intrapartum antibiotic prophylaxis; however, NM incidence caused by E. coli remains the same (May et al., 2005; van der Flier, 2021). Moreover, NMEC is a significant cause of relapsed infections in neonates (Anderson and Gilbert, 1990). NMEC are categorised genetically based on multi-locus sequence type (ST) or by serotyping based on cell-surface O antigen (O), capsule (K), and flagella (H) antigens. Analysis of NMEC diversity in France revealed ~25% of isolates belong to the ST95 clonal complex (STc95) (Geslain et al., 2019), however, a global picture of NMEC epidemiology is lacking. NMEC possess a limited diversity of serotypes, dominated by O18:K1:H7, O1:K1, O7:K1, O16:K1, O83:K1, and O45:K1:H7, which together account for >70% of NMEC (Sarff et al., 1975; Plainvert et al., 2007; Bidet et al., 2007; Johnson et al., 2002). Notably, ~80% of NMEC express the K1 capsule, a polysaccharide comprising linear homopolymers of α2–8-linked N-acetyl neuraminic acid (Sarff et al., 1975; Robbins et al., 1974). Apart from the K1 capsule, specific NMEC virulence factors are less-well defined, though studies have demonstrated a role for S fimbriae (Prasadarao et al., 1993), the outer membrane protein OmpA (Prasadarao et al., 1996), the endothelial invasin IbeA (Huang et al., 2001), and the cytotoxin necrotising factor CNF1 (Wang and Kim, 2013) in translocation of NMEC across the blood–brain barrier and pathogenesis. A large plasmid encoding colicin V (ColV), colicin Ia bacteriocins, and several virulence genes including iron-chelating siderophore systems has also been strongly associated with NMEC virulence (Peigne et al., 2009). Despite being the second major NM aetiology, genomic studies on NMEC are lacking, with most reporting single NMEC complete genomes. Here, we present the genomic analyses of a collection of 58 NMEC isolates obtained from seven different geographic regions over 46 years to understand virulence gene content, antibiotic resistance, and genomic diversity. In addition, we provide a complete genome for 18 NMEC isolates representing different STs, serotypes, and virulence gene profiles, thus more than tripling the number of available NMEC genomes that can be used as references in future studies. 
Three infants in our study suffered recrudescent invasive NMEC infection, and we show this was caused by the same isolate. We further revealed that patients that suffered recrudescent invasive infection had severe gut dysbiosis, and detected the infecting isolate in the intestinal microflora, suggesting NMEC colonisation of the gut provides a reservoir that can seed repeat infection. https://elifesciences.org/articles/91853 ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. EVIDENCE: Neonatal meningitis (NM) is a devastating disease with a mortality rate of 10–15% and severe neurological sequelae including hearing loss, reduced motor skills, and impaired development in 30–50% of cases (Doctor et al., 2001; Stevens et al., 2003; Harvey et al., 1999). The incidence of disease is highest in low-income countries and occurs at a rate of 0.1–6.1/1000 live births (Harvey et al., 1999). Escherichia coli is the second most common cause of NM in full-term infants (herein NMEC), after group B Streptococcus (GBS) (Ouchenir et al., 2017; Gaschignard et al., 2011), and the most common cause of meningitis in preterm neonates (Gaschignard et al., 2011; Basmaci et al., 2015). Together, these two pathogens cause ~60% of all cases, with on average one case of NMEC for every two cases of GBS (May et al., 2005; Holt et al., 2001). In several countries, NM incidence caused by GBS has declined due to maternal intrapartum antibiotic prophylaxis; however, NM incidence caused by E. coli remains the same (May et al., 2005; van der Flier, 2021). Moreover, NMEC is a significant cause of relapsed infections in neonates (Anderson and Gilbert, 1990). NMEC are categorised genetically based on multi-locus sequence type (ST) or by serotyping based on cell-surface O antigen (O), capsule (K), and flagella (H) antigens. Analysis of NMEC diversity in France revealed ~25% of isolates belong to the ST95 clonal complex (STc95) (Geslain et al., 2019), however, a global picture of NMEC epidemiology is lacking. NMEC possess a limited diversity of serotypes, dominated by O18:K1:H7, O1:K1, O7:K1, O16:K1, O83:K1, and O45:K1:H7, which together account for >70% of NMEC (Sarff et al., 1975; Plainvert et al., 2007; Bidet et al., 2007; Johnson et al., 2002). Notably, ~80% of NMEC express the K1 capsule, a polysaccharide comprising linear homopolymers of α2–8-linked N-acetyl neuraminic acid (Sarff et al., 1975; Robbins et al., 1974). Apart from the K1 capsule, specific NMEC virulence factors are less-well defined, though studies have demonstrated a role for S fimbriae (Prasadarao et al., 1993), the outer membrane protein OmpA (Prasadarao et al., 1996), the endothelial invasin IbeA (Huang et al., 2001), and the cytotoxin necrotising factor CNF1 (Wang and Kim, 2013) in translocation of NMEC across the blood–brain barrier and pathogenesis. A large plasmid encoding colicin V (ColV), colicin Ia bacteriocins, and several virulence genes including iron-chelating siderophore systems has also been strongly associated with NMEC virulence (Peigne et al., 2009). Despite being the second major NM aetiology, genomic studies on NMEC are lacking, with most reporting single NMEC complete genomes. Here, we present the genomic analyses of a collection of 58 NMEC isolates obtained from seven different geographic regions over 46 years to understand virulence gene content, antibiotic resistance, and genomic diversity. In addition, we provide a complete genome for 18 NMEC isolates representing different STs, serotypes, and virulence gene profiles, thus more than tripling the number of available NMEC genomes that can be used as references in future studies. Three infants in our study suffered recrudescent invasive NMEC infection, and we show this was caused by the same isolate. 
We further revealed that patients that suffered recrudescent invasive infection had severe gut dysbiosis, and detected the infecting isolate in the intestinal microflora, suggesting NMEC colonisation of the gut provides a reservoir that can seed repeat infection. USER: What are the two most prevalent bacteria that cause neonatal meningitis? What are features of NMEC that allow for the bacteria to cause disease? What sequence type is associated with neonatal meningitis? Finally, what does O18:K1:H7 mean? Can you answer these questions in 500 words or less? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
28
47
548
null
367
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
I am a high school teacher and I've been concerned about my students using generative AI to complete their assignments. I'm afraid that they are finding the easy way to get a passing grade without putting in the effort, and this will result in them not learning anything. To find a way to deal with this, I've been reading some articles and I found one that has several interesting points. What are some ways in which, as a teacher, I can use GenAI to my advantage? What concerns should I be aware of regarding the use of GenAI by students?
Generative AI (GenAI) can be defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)”. As generative Artificial Intelligence (AI) continues to evolve rapidly, in the next few years, it will drive innovation and improvements in higher education, but it will also create a myriad of new challenges. Specifically, ChatGPT (Chat Generative Pre-Trained Transformer), a chatbot driven by GenAI, has been attracting headlines and has become the center of ongoing debate regarding the potential negative effects that it can have on teaching and learning. ChatGPT describes itself as a large language model trained to “generate humanlike text based on a given prompt or context. It can be used for a variety of natural language processing tasks, such as text completion, conversation generation, and language translation”. Given its advanced generative skills, one of the major concerns in higher education is that it can be used to reply to exam questions, write assignments and draft academic essays without being easily detected by current versions of anti-plagiarism software. Responses from higher education institutions (HEIs) to this emerging threat to academic integrity have been varied and fragmented, ranging from those that have rushed to implement full bans on the use of ChatGPT to others who have started to embrace it by publishing student guidance on how to engage with AI effectively and ethically. Nevertheless, most of the information provided by higher education institutions (HEIs) to students so far has been unclear or lacking in detail regarding the specific circumstances in which the use of ChatGPT is allowed or considered acceptable. However, what is evident is that most HEIs are currently in the process of reviewing their policies around the use of ChatGPT and its implications for academic integrity. Meanwhile, a growing body of literature has started to document the potential challenges and opportunities posed by ChatGPT. Among the key issues with the use of ChatGPT in education, accuracy, reliability, and plagiarism are regularly cited. Issues related to accuracy and reliability include relying on biased data (i.e., the limited scope of data used to train ChatGPT), having limited up-to-date knowledge (i.e., training stopped in 2021), and generating incorrect/fake information (e.g., providing fictitious references). It is also argued that the risk of overreliance on ChatGPT could negatively impact students’ critical thinking and problem-solving skills. Regarding plagiarism, evidence suggests that essays generated by ChatGPT can bypass conventional plagiarism detectors. ChatGPT can also successfully pass graduate-level exams, which could potentially make some types of assessments obsolete. ChatGPT can also be used to enhance education, provided that its limitations (as discussed in the previous paragraph) are recognized. For instance, ChatGPT can be used as a tool to generate answers to theory-based questions and generate initial ideas for essays, but students should be mindful of the need to examine the credibility of generated responses. Given its advanced conversational skills, ChatGPT can also provide formative feedback on essays and become a tutoring system by stimulating critical thinking and debates among students. 
The language editing and translation skills of ChatGPT can also contribute towards increased equity in education by somewhat leveling the playing field for students from non-English speaking backgrounds. ChatGPT can also be a valuable tool for educators as it can help in creating lesson plans for specific courses, developing customized resources and learning activities (i.e., personalized learning support), carrying out assessment and evaluation, and supporting the writing process of research. ChatGPT might also be used to enrich a reflective teaching practice by testing existing assessment methods to validate their scope, design, and capabilities beyond the possible use of GenAI, challenging academics to develop AI-proof assessments as a result and contributing to the authentic assessment of students’ learning achievements. Overall, some early studies have started to shed some light regarding the potential challenges and opportunities of ChatGPT for higher education, but more in-depth discussions are needed. We argue that the current discourse is highly focused on studying ChatGPT as an object rather than a subject. Given the advanced generative capabilities of ChatGPT, we would like to contribute to the ongoing discussion by exploring what ChatGPT has to say about itself regarding the challenges and opportunities that it represents for higher education. By adopting this approach, we hope to contribute to a more balanced discussion that accommodates the AI perspective using a ‘thing ethnography’ methodology. This approach considers things not as objects but as subjects that possess a non-human worldview or perspective that can point to novel insights in research.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== I am a high school teacher and I've been concerned about my students using generative AI to complete their assignments. I'm afraid that they are finding the easy way to get a passing grade without putting in the effort, and this will result in them not learning anything. To find a way to deal with this, I've been reading some articles and I found one that has several interesting points. What are some ways in which, as a teacher, I can use GenAI to my advantage? What concerns should I be aware of regarding the use of GenAI by students? {passage 0} ========== Generative AI (GenAI) can be defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)”. As generative Artificial Intelligence (AI) continues to evolve rapidly, in the next few years, it will drive innovation and improvements in higher education, but it will also create a myriad of new challenges. Specifically, ChatGPT (Chat Generative Pre-Trained Transformer), a chatbot driven by GenAI, has been attracting headlines and has become the center of ongoing debate regarding the potential negative effects that it can have on teaching and learning. ChatGPT describes itself as a large language model trained to “generate humanlike text based on a given prompt or context. It can be used for a variety of natural language processing tasks, such as text completion, conversation generation, and language translation”. Given its advanced generative skills, one of the major concerns in higher education is that it can be used to reply to exam questions, write assignments and draft academic essays without being easily detected by current versions of anti-plagiarism software. Responses from higher education institutions (HEIs) to this emerging threat to academic integrity have been varied and fragmented, ranging from those that have rushed to implement full bans on the use of ChatGPT to others who have started to embrace it by publishing student guidance on how to engage with AI effectively and ethically. Nevertheless, most of the information provided by higher education institutions (HEIs) to students so far has been unclear or lacking in detail regarding the specific circumstances in which the use of ChatGPT is allowed or considered acceptable. However, what is evident is that most HEIs are currently in the process of reviewing their policies around the use of ChatGPT and its implications for academic integrity. Meanwhile, a growing body of literature has started to document the potential challenges and opportunities posed by ChatGPT. Among the key issues with the use of ChatGPT in education, accuracy, reliability, and plagiarism are regularly cited. Issues related to accuracy and reliability include relying on biased data (i.e., the limited scope of data used to train ChatGPT), having limited up-to-date knowledge (i.e., training stopped in 2021), and generating incorrect/fake information (e.g., providing fictitious references). It is also argued that the risk of overreliance on ChatGPT could negatively impact students’ critical thinking and problem-solving skills. Regarding plagiarism, evidence suggests that essays generated by ChatGPT can bypass conventional plagiarism detectors. 
ChatGPT can also successfully pass graduate-level exams, which could potentially make some types of assessments obsolete. ChatGPT can also be used to enhance education, provided that its limitations (as discussed in the previous paragraph) are recognized. For instance, ChatGPT can be used as a tool to generate answers to theory-based questions and generate initial ideas for essays, but students should be mindful of the need to examine the credibility of generated responses. Given its advanced conversational skills, ChatGPT can also provide formative feedback on essays and become a tutoring system by stimulating critical thinking and debates among students. The language editing and translation skills of ChatGPT can also contribute towards increased equity in education by somewhat leveling the playing field for students from non-English speaking backgrounds. ChatGPT can also be a valuable tool for educators as it can help in creating lesson plans for specific courses, developing customized resources and learning activities (i.e., personalized learning support), carrying out assessment and evaluation, and supporting the writing process of research. ChatGPT might also be used to enrich a reflective teaching practice by testing existing assessment methods to validate their scope, design, and capabilities beyond the possible use of GenAI, challenging academics to develop AI-proof assessments as a result and contributing to the authentic assessment of students’ learning achievements. Overall, some early studies have started to shed some light regarding the potential challenges and opportunities of ChatGPT for higher education, but more in-depth discussions are needed. We argue that the current discourse is highly focused on studying ChatGPT as an object rather than a subject. Given the advanced generative capabilities of ChatGPT, we would like to contribute to the ongoing discussion by exploring what ChatGPT has to say about itself regarding the challenges and opportunities that it represents for higher education. By adopting this approach, we hope to contribute to a more balanced discussion that accommodates the AI perspective using a ‘thing ethnography’ methodology. This approach considers things not as objects but as subjects that possess a non-human worldview or perspective that can point to novel insights in research. https://www.mdpi.com/2227-7102/13/9/856
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Generative AI (GenAI) can be defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)”. As generative Artificial Intelligence (AI) continues to evolve rapidly, in the next few years, it will drive innovation and improvements in higher education, but it will also create a myriad of new challenges. Specifically, ChatGPT (Chat Generative Pre-Trained Transformer), a chatbot driven by GenAI, has been attracting headlines and has become the center of ongoing debate regarding the potential negative effects that it can have on teaching and learning. ChatGPT describes itself as a large language model trained to “generate humanlike text based on a given prompt or context. It can be used for a variety of natural language processing tasks, such as text completion, conversation generation, and language translation”. Given its advanced generative skills, one of the major concerns in higher education is that it can be used to reply to exam questions, write assignments and draft academic essays without being easily detected by current versions of anti-plagiarism software. Responses from higher education institutions (HEIs) to this emerging threat to academic integrity have been varied and fragmented, ranging from those that have rushed to implement full bans on the use of ChatGPT to others who have started to embrace it by publishing student guidance on how to engage with AI effectively and ethically. Nevertheless, most of the information provided by higher education institutions (HEIs) to students so far has been unclear or lacking in detail regarding the specific circumstances in which the use of ChatGPT is allowed or considered acceptable. However, what is evident is that most HEIs are currently in the process of reviewing their policies around the use of ChatGPT and its implications for academic integrity. Meanwhile, a growing body of literature has started to document the potential challenges and opportunities posed by ChatGPT. Among the key issues with the use of ChatGPT in education, accuracy, reliability, and plagiarism are regularly cited. Issues related to accuracy and reliability include relying on biased data (i.e., the limited scope of data used to train ChatGPT), having limited up-to-date knowledge (i.e., training stopped in 2021), and generating incorrect/fake information (e.g., providing fictitious references). It is also argued that the risk of overreliance on ChatGPT could negatively impact students’ critical thinking and problem-solving skills. Regarding plagiarism, evidence suggests that essays generated by ChatGPT can bypass conventional plagiarism detectors. ChatGPT can also successfully pass graduate-level exams, which could potentially make some types of assessments obsolete. ChatGPT can also be used to enhance education, provided that its limitations (as discussed in the previous paragraph) are recognized. For instance, ChatGPT can be used as a tool to generate answers to theory-based questions and generate initial ideas for essays, but students should be mindful of the need to examine the credibility of generated responses. 
Given its advanced conversational skills, ChatGPT can also provide formative feedback on essays and become a tutoring system by stimulating critical thinking and debates among students. The language editing and translation skills of ChatGPT can also contribute towards increased equity in education by somewhat leveling the playing field for students from non-English speaking backgrounds. ChatGPT can also be a valuable tool for educators as it can help in creating lesson plans for specific courses, developing customized resources and learning activities (i.e., personalized learning support), carrying out assessment and evaluation, and supporting the writing process of research. ChatGPT might also be used to enrich a reflective teaching practice by testing existing assessment methods to validate their scope, design, and capabilities beyond the possible use of GenAI, challenging academics to develop AI-proof assessments as a result and contributing to the authentic assessment of students’ learning achievements. Overall, some early studies have started to shed some light regarding the potential challenges and opportunities of ChatGPT for higher education, but more in-depth discussions are needed. We argue that the current discourse is highly focused on studying ChatGPT as an object rather than a subject. Given the advanced generative capabilities of ChatGPT, we would like to contribute to the ongoing discussion by exploring what ChatGPT has to say about itself regarding the challenges and opportunities that it represents for higher education. By adopting this approach, we hope to contribute to a more balanced discussion that accommodates the AI perspective using a ‘thing ethnography’ methodology. This approach considers things not as objects but as subjects that possess a non-human worldview or perspective that can point to novel insights in research. USER: I am a high school teacher and I've been concerned about my students using generative AI to complete their assignments. I'm afraid that they are finding the easy way to get a passing grade without putting in the effort, and this will result in them not learning anything. To find a way to deal with this, I've been reading some articles and I found one that has several interesting points. What are some ways in which, as a teacher, I can use GenAI to my advantage? What concerns should I be aware of regarding the use of GenAI by students? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
100
758
null
632
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
explain the pros and cons of the three different medications for opioid use disorder, in plain language, for someone who is not a medical person
Methadone Methadone is a slow-acting opioid agonist indicated in the treatment of OUD and opioid withdrawal management. Although methadone is only available through approved opioid treatment programs, federal and state laws allow take-home doses for select patients who have demonstrated treatment progress [14, 15]. Methadone treatment aims to suppress opioid withdrawal, block the effects of illicit opioids, reduce opioid craving, and facilitate patient engagement in psychosocial and nonpharmacological interventions. Methadone treatment has shown superiority over abstinence-based approaches [16]. While methadone is a frequently utilized medication in MAT, both patients and providers should be aware of the potential risks associated with treatment. Methadone treatment increases the risk of arrhythmias including QT interval prolongation and torsades des pointes [17, 18]. Obtaining a history of structural heart disease, arrhythmia, syncope, and other risk factors for QT interval prolongation is critical before starting treatment. Methadone also presents with numerous drug-drug interactions due to cytochrome P450 isoenzymes involved in its metabolism. MAT providers should closely monitor for interactions that could potentiate or synergize methadone’s effects on a patient. Methadone is safe for use in pregnant patients [14, 15]. Practice guidelines published by the American Society of Addiction Medicine (ASAM) Methadone Action Group [14, 15] recommend an initial dose range from 10 mg to 30 mg, reassessing every 2–4 h when peak levels are reached. Following an initiation period, methadone dosing is based on the goals of treatment and patient dependence. Less than 30 mg per day can lessen acute withdrawal but is not as effective in suppressing cravings. Most patients fare better if their initial 30 mg to 40 mg per-day dose is gradually increased to a 60 mg to 120 mg per day maintenance dose. Randomized trials have shown that patients demonstrate better retention in treatment with higher doses of 80–100 mg per day [19, 20]. A dose-response effect is observed for methadone treatment retention rates [21, 22]. Doses above 120 mg per day are utilized with select patients due to the increased purity of heroin and the strength of prescription opioids resulting in increased difficulty to block opioid effects. The optimal length of treatment is not well established; however, relapse rates are highest for patients who drop out [14, 15]. Naltrexone Naltrexone is a long-acting, full opioid antagonist. Like buprenorphine, naltrexone can be prescribed in the outpatient setting for OUD. Unlike buprenorphine, naltrexone can also be prescribed outpatient for alcohol use disorder treatment [14, 15]. Both formulations, oral and extended-release (ER) injectable, have demonstrated treatment efficacy; however, oral naltrexone is not recommended except under limited circumstances because retention in depot naltrexone is better than usually observed in studies utilizing oral naltrexone [23]. Trials are often limited due to high dropout rates and poor adherence [14, 15]. Adding an agent that improves dopaminergic function to complement naltrexone is a novel approach being studied to encourage adherence [24]. Treatment goals include prevention of relapse, inhibition of illicit opioid effects, opioid craving reduction, and the facilitation of patient engagement in psychosocial and nonpharmacological interventions [14, 15]. 
Oral naltrexone is best for those who can be closely supervised and are highly motivated because it has high rates of nonadherence and a high risk for overdose upon relapse [23]. ER injectable naltrexone is most effective for patients who have failed other MAT options or are unable to obtain agonist treatment. Both formulations are generally well tolerated; however, patients should be cautioned regarding the high-risk opioid overdose with subsequent relapse due to diminished tolerance and heightened sensitivity [14, 15]. Before naltrexone administration, the patient must be adequately detoxified from opioids with no physical dependence. A naloxone challenge can be utilized when uncertain of detoxification, monitoring for signs and symptoms of withdrawal. Oral naltrexone can be dosed at 50 mg daily or three times weekly with two 100 mg doses followed by one 150 mg dose. ER injectable naltrexone can be given every 3–4 weeks by deep intramuscular injection in the gluteal muscle at a set dosage of 380 mg per injection [14, 15]. Naltrexone ER is associated with side effects such as insomnia, clinically insignificant elevation of transaminases, hypertension, naso-pharyngitis, and influenza [25]. Although naltrexone does not reduce respiratory drive, relapse with high-dose opioids may result in accidental overdose death due to diminished opioid tolerance. Unlike methadone and buprenorphine, naltrexone ER is not recommended for use in pregnant or breastfeeding women [14, 15]. Buprenorphine Buprenorphine is a partial opioid agonist utilized to treat OUD [26]. Buprenorphine has the ability to relieve a patient’s drug cravings while maintaining a higher safety profile than other MAT medications. Due to buprenorphine’s “ceiling effect,” increasing dosages will not cause equally increasing respiratory depression in patients [27]. As such, buprenorphine is less likely to cause fatal respiratory depression during overdose [28, 29]. Caution should be applied when combining buprenorphine with other sedative medications, potentially causing higher levels of sedation. Buprenorphine, like methadone, is safe for use in pregnant patients [14, 15]. It demonstrates less peak-dosing suppression of fetal heart rate and less severe neonatal abstinence syndrome than methadone [25]. A critical distinction of buprenorphine therapy is its ability for outpatient prescription following the Drug Addiction Treatment Act (DATA) of 2000 [30]. Any physician can prescribe buprenorphine following completion of an online training course. This distinction can increase access to MAT in otherwise inaccessible patient populations. Following a closely monitored initiation phase, dosing is usually 2 mg to 4 mg to reduce the risk of precipitating withdrawal [14, 15]. If well tolerated, the dose can be increased fairly rapidly to a dose that provides stable effects for 24 h and is effective, with evidence suggesting that doses of 16 mg and greater may be more effective at suppressing illicit opioid use [23]. The FDA recommendation limits dosing to 24 mg per day because higher doses may increase diversion risk [14, 15]. Retention on buprenorphine across low (2 mg–6 mg per day), medium (7 mg–15 mg per day), and high (≥16 mg per day) doses is significantly superior to placebo [31]. However, only high-dose buprenorphine reduces opioid use significantly compared to placebo [32]. Buprenorphine can also be administered with naloxone as a single-dose tablet or buccal film [14, 15]. 
The goal of combining naloxone, an opioid antagonist, with buprenorphine is to discourage buprenorphine abuse. If the buprenorphine/naloxone product is crushed for the purpose of injection, naloxone will antagonize the agonistic effects of buprenorphine [33]. The FDA recently approved several new buprenorphine formulations for the treatment of OUD, including an ER injection, but data regarding their effectiveness are limited [14, 15]. Some emergency departments are now initiating buprenorphine therapy to patients experiencing withdrawal symptoms [34]. This new strategy has demonstrated promising results toward improving rates of MAT initiation, and its expansion is likely to continue over time [34, 35]. The Substance Abuse and Mental Health Services Administration (SAMHSA) recommends appropriate counseling and social support programs for patients receiving buprenorphine therapy [36] “Group-based” buprenorphine treatments have gained interest since their inception, providing both buprenorphine prescription and group counseling together in a destigmatized environment. This model also increases the number of patients that a single physician could treat, addressing areas with limited access to MAT providers [37]. Some studies have suggested possible benefits of this treatment model [38, 39], particularly in prolonging treatment retention. Despite these advantages, the available supporting research has been limited and varied [38]. A 2017 literature review [39] examined 10 studies, 4 of which utilized small-group models and 6 of which utilized group psychotherapy. The authors concluded that there was limited evidence to support group-based buprenorphine therapy but that much of the literature available was either weak or potentially biased. Based on the limited research available and isolated reports of success, this practice has some feasibility and expands buprenorphine access for patients.
"================ <TEXT PASSAGE> ======= Methadone Methadone is a slow-acting opioid agonist indicated in the treatment of OUD and opioid withdrawal management. Although methadone is only available through approved opioid treatment programs, federal and state laws allow take-home doses for select patients who have demonstrated treatment progress [14, 15]. Methadone treatment aims to suppress opioid withdrawal, block the effects of illicit opioids, reduce opioid craving, and facilitate patient engagement in psychosocial and nonpharmacological interventions. Methadone treatment has shown superiority over abstinence-based approaches [16]. While methadone is a frequently utilized medication in MAT, both patients and providers should be aware of the potential risks associated with treatment. Methadone treatment increases the risk of arrhythmias including QT interval prolongation and torsades des pointes [17, 18]. Obtaining a history of structural heart disease, arrhythmia, syncope, and other risk factors for QT interval prolongation is critical before starting treatment. Methadone also presents with numerous drug-drug interactions due to cytochrome P450 isoenzymes involved in its metabolism. MAT providers should closely monitor for interactions that could potentiate or synergize methadone’s effects on a patient. Methadone is safe for use in pregnant patients [14, 15]. Practice guidelines published by the American Society of Addiction Medicine (ASAM) Methadone Action Group [14, 15] recommend an initial dose range from 10 mg to 30 mg, reassessing every 2–4 h when peak levels are reached. Following an initiation period, methadone dosing is based on the goals of treatment and patient dependence. Less than 30 mg per day can lessen acute withdrawal but is not as effective in suppressing cravings. Most patients fare better if their initial 30 mg to 40 mg per-day dose is gradually increased to a 60 mg to 120 mg per day maintenance dose. Randomized trials have shown that patients demonstrate better retention in treatment with higher doses of 80–100 mg per day [19, 20]. A dose-response effect is observed for methadone treatment retention rates [21, 22]. Doses above 120 mg per day are utilized with select patients due to the increased purity of heroin and the strength of prescription opioids resulting in increased difficulty to block opioid effects. The optimal length of treatment is not well established; however, relapse rates are highest for patients who drop out [14, 15]. Naltrexone Naltrexone is a long-acting, full opioid antagonist. Like buprenorphine, naltrexone can be prescribed in the outpatient setting for OUD. Unlike buprenorphine, naltrexone can also be prescribed outpatient for alcohol use disorder treatment [14, 15]. Both formulations, oral and extended-release (ER) injectable, have demonstrated treatment efficacy; however, oral naltrexone is not recommended except under limited circumstances because retention in depot naltrexone is better than usually observed in studies utilizing oral naltrexone [23]. Trials are often limited due to high dropout rates and poor adherence [14, 15]. Adding an agent that improves dopaminergic function to complement naltrexone is a novel approach being studied to encourage adherence [24]. Treatment goals include prevention of relapse, inhibition of illicit opioid effects, opioid craving reduction, and the facilitation of patient engagement in psychosocial and nonpharmacological interventions [14, 15]. 
Oral naltrexone is best for those who can be closely supervised and are highly motivated because it has high rates of nonadherence and a high risk for overdose upon relapse [23]. ER injectable naltrexone is most effective for patients who have failed other MAT options or are unable to obtain agonist treatment. Both formulations are generally well tolerated; however, patients should be cautioned regarding the high-risk opioid overdose with subsequent relapse due to diminished tolerance and heightened sensitivity [14, 15]. Before naltrexone administration, the patient must be adequately detoxified from opioids with no physical dependence. A naloxone challenge can be utilized when uncertain of detoxification, monitoring for signs and symptoms of withdrawal. Oral naltrexone can be dosed at 50 mg daily or three times weekly with two 100 mg doses followed by one 150 mg dose. ER injectable naltrexone can be given every 3–4 weeks by deep intramuscular injection in the gluteal muscle at a set dosage of 380 mg per injection [14, 15]. Naltrexone ER is associated with side effects such as insomnia, clinically insignificant elevation of transaminases, hypertension, naso-pharyngitis, and influenza [25]. Although naltrexone does not reduce respiratory drive, relapse with high-dose opioids may result in accidental overdose death due to diminished opioid tolerance. Unlike methadone and buprenorphine, naltrexone ER is not recommended for use in pregnant or breastfeeding women [14, 15]. Buprenorphine Buprenorphine is a partial opioid agonist utilized to treat OUD [26]. Buprenorphine has the ability to relieve a patient’s drug cravings while maintaining a higher safety profile than other MAT medications. Due to buprenorphine’s “ceiling effect,” increasing dosages will not cause equally increasing respiratory depression in patients [27]. As such, buprenorphine is less likely to cause fatal respiratory depression during overdose [28, 29]. Caution should be applied when combining buprenorphine with other sedative medications, potentially causing higher levels of sedation. Buprenorphine, like methadone, is safe for use in pregnant patients [14, 15]. It demonstrates less peak-dosing suppression of fetal heart rate and less severe neonatal abstinence syndrome than methadone [25]. A critical distinction of buprenorphine therapy is its ability for outpatient prescription following the Drug Addiction Treatment Act (DATA) of 2000 [30]. Any physician can prescribe buprenorphine following completion of an online training course. This distinction can increase access to MAT in otherwise inaccessible patient populations. Following a closely monitored initiation phase, dosing is usually 2 mg to 4 mg to reduce the risk of precipitating withdrawal [14, 15]. If well tolerated, the dose can be increased fairly rapidly to a dose that provides stable effects for 24 h and is effective, with evidence suggesting that doses of 16 mg and greater may be more effective at suppressing illicit opioid use [23]. The FDA recommendation limits dosing to 24 mg per day because higher doses may increase diversion risk [14, 15]. Retention on buprenorphine across low (2 mg–6 mg per day), medium (7 mg–15 mg per day), and high (≥16 mg per day) doses is significantly superior to placebo [31]. However, only high-dose buprenorphine reduces opioid use significantly compared to placebo [32]. Buprenorphine can also be administered with naloxone as a single-dose tablet or buccal film [14, 15]. 
The goal of combining naloxone, an opioid antagonist, with buprenorphine is to discourage buprenorphine abuse. If the buprenorphine/naloxone product is crushed for the purpose of injection, naloxone will antagonize the agonistic effects of buprenorphine [33]. The FDA recently approved several new buprenorphine formulations for the treatment of OUD, including an ER injection, but data regarding their effectiveness are limited [14, 15]. Some emergency departments are now initiating buprenorphine therapy to patients experiencing withdrawal symptoms [34]. This new strategy has demonstrated promising results toward improving rates of MAT initiation, and its expansion is likely to continue over time [34, 35]. The Substance Abuse and Mental Health Services Administration (SAMHSA) recommends appropriate counseling and social support programs for patients receiving buprenorphine therapy [36] “Group-based” buprenorphine treatments have gained interest since their inception, providing both buprenorphine prescription and group counseling together in a destigmatized environment. This model also increases the number of patients that a single physician could treat, addressing areas with limited access to MAT providers [37]. Some studies have suggested possible benefits of this treatment model [38, 39], particularly in prolonging treatment retention. Despite these advantages, the available supporting research has been limited and varied [38]. A 2017 literature review [39] examined 10 studies, 4 of which utilized small-group models and 6 of which utilized group psychotherapy. The authors concluded that there was limited evidence to support group-based buprenorphine therapy but that much of the literature available was either weak or potentially biased. Based on the limited research available and isolated reports of success, this practice has some feasibility and expands buprenorphine access for patients. https://pubmed.ncbi.nlm.nih.gov/35285220/ ================ <QUESTION> ======= explain the pros and cons of the three different medications for opioid use disorder, in plain language, for someone who is not a medical person ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." EVIDENCE: Methadone Methadone is a slow-acting opioid agonist indicated in the treatment of OUD and opioid withdrawal management. Although methadone is only available through approved opioid treatment programs, federal and state laws allow take-home doses for select patients who have demonstrated treatment progress [14, 15]. Methadone treatment aims to suppress opioid withdrawal, block the effects of illicit opioids, reduce opioid craving, and facilitate patient engagement in psychosocial and nonpharmacological interventions. Methadone treatment has shown superiority over abstinence-based approaches [16]. While methadone is a frequently utilized medication in MAT, both patients and providers should be aware of the potential risks associated with treatment. Methadone treatment increases the risk of arrhythmias including QT interval prolongation and torsades des pointes [17, 18]. Obtaining a history of structural heart disease, arrhythmia, syncope, and other risk factors for QT interval prolongation is critical before starting treatment. Methadone also presents with numerous drug-drug interactions due to cytochrome P450 isoenzymes involved in its metabolism. MAT providers should closely monitor for interactions that could potentiate or synergize methadone’s effects on a patient. Methadone is safe for use in pregnant patients [14, 15]. Practice guidelines published by the American Society of Addiction Medicine (ASAM) Methadone Action Group [14, 15] recommend an initial dose range from 10 mg to 30 mg, reassessing every 2–4 h when peak levels are reached. Following an initiation period, methadone dosing is based on the goals of treatment and patient dependence. Less than 30 mg per day can lessen acute withdrawal but is not as effective in suppressing cravings. Most patients fare better if their initial 30 mg to 40 mg per-day dose is gradually increased to a 60 mg to 120 mg per day maintenance dose. Randomized trials have shown that patients demonstrate better retention in treatment with higher doses of 80–100 mg per day [19, 20]. A dose-response effect is observed for methadone treatment retention rates [21, 22]. Doses above 120 mg per day are utilized with select patients due to the increased purity of heroin and the strength of prescription opioids resulting in increased difficulty to block opioid effects. The optimal length of treatment is not well established; however, relapse rates are highest for patients who drop out [14, 15]. Naltrexone Naltrexone is a long-acting, full opioid antagonist. Like buprenorphine, naltrexone can be prescribed in the outpatient setting for OUD. Unlike buprenorphine, naltrexone can also be prescribed outpatient for alcohol use disorder treatment [14, 15]. Both formulations, oral and extended-release (ER) injectable, have demonstrated treatment efficacy; however, oral naltrexone is not recommended except under limited circumstances because retention in depot naltrexone is better than usually observed in studies utilizing oral naltrexone [23]. Trials are often limited due to high dropout rates and poor adherence [14, 15]. 
Adding an agent that improves dopaminergic function to complement naltrexone is a novel approach being studied to encourage adherence [24]. Treatment goals include prevention of relapse, inhibition of illicit opioid effects, opioid craving reduction, and the facilitation of patient engagement in psychosocial and nonpharmacological interventions [14, 15]. Oral naltrexone is best for those who can be closely supervised and are highly motivated because it has high rates of nonadherence and a high risk for overdose upon relapse [23]. ER injectable naltrexone is most effective for patients who have failed other MAT options or are unable to obtain agonist treatment. Both formulations are generally well tolerated; however, patients should be cautioned regarding the high-risk opioid overdose with subsequent relapse due to diminished tolerance and heightened sensitivity [14, 15]. Before naltrexone administration, the patient must be adequately detoxified from opioids with no physical dependence. A naloxone challenge can be utilized when uncertain of detoxification, monitoring for signs and symptoms of withdrawal. Oral naltrexone can be dosed at 50 mg daily or three times weekly with two 100 mg doses followed by one 150 mg dose. ER injectable naltrexone can be given every 3–4 weeks by deep intramuscular injection in the gluteal muscle at a set dosage of 380 mg per injection [14, 15]. Naltrexone ER is associated with side effects such as insomnia, clinically insignificant elevation of transaminases, hypertension, naso-pharyngitis, and influenza [25]. Although naltrexone does not reduce respiratory drive, relapse with high-dose opioids may result in accidental overdose death due to diminished opioid tolerance. Unlike methadone and buprenorphine, naltrexone ER is not recommended for use in pregnant or breastfeeding women [14, 15]. Buprenorphine Buprenorphine is a partial opioid agonist utilized to treat OUD [26]. Buprenorphine has the ability to relieve a patient’s drug cravings while maintaining a higher safety profile than other MAT medications. Due to buprenorphine’s “ceiling effect,” increasing dosages will not cause equally increasing respiratory depression in patients [27]. As such, buprenorphine is less likely to cause fatal respiratory depression during overdose [28, 29]. Caution should be applied when combining buprenorphine with other sedative medications, potentially causing higher levels of sedation. Buprenorphine, like methadone, is safe for use in pregnant patients [14, 15]. It demonstrates less peak-dosing suppression of fetal heart rate and less severe neonatal abstinence syndrome than methadone [25]. A critical distinction of buprenorphine therapy is its ability for outpatient prescription following the Drug Addiction Treatment Act (DATA) of 2000 [30]. Any physician can prescribe buprenorphine following completion of an online training course. This distinction can increase access to MAT in otherwise inaccessible patient populations. Following a closely monitored initiation phase, dosing is usually 2 mg to 4 mg to reduce the risk of precipitating withdrawal [14, 15]. If well tolerated, the dose can be increased fairly rapidly to a dose that provides stable effects for 24 h and is effective, with evidence suggesting that doses of 16 mg and greater may be more effective at suppressing illicit opioid use [23]. The FDA recommendation limits dosing to 24 mg per day because higher doses may increase diversion risk [14, 15]. 
Retention on buprenorphine across low (2 mg–6 mg per day), medium (7 mg–15 mg per day), and high (≥16 mg per day) doses is significantly superior to placebo [31]. However, only high-dose buprenorphine reduces opioid use significantly compared to placebo [32]. Buprenorphine can also be administered with naloxone as a single-dose tablet or buccal film [14, 15]. The goal of combining naloxone, an opioid antagonist, with buprenorphine is to discourage buprenorphine abuse. If the buprenorphine/naloxone product is crushed for the purpose of injection, naloxone will antagonize the agonistic effects of buprenorphine [33]. The FDA recently approved several new buprenorphine formulations for the treatment of OUD, including an ER injection, but data regarding their effectiveness are limited [14, 15]. Some emergency departments are now initiating buprenorphine therapy to patients experiencing withdrawal symptoms [34]. This new strategy has demonstrated promising results toward improving rates of MAT initiation, and its expansion is likely to continue over time [34, 35]. The Substance Abuse and Mental Health Services Administration (SAMHSA) recommends appropriate counseling and social support programs for patients receiving buprenorphine therapy [36] “Group-based” buprenorphine treatments have gained interest since their inception, providing both buprenorphine prescription and group counseling together in a destigmatized environment. This model also increases the number of patients that a single physician could treat, addressing areas with limited access to MAT providers [37]. Some studies have suggested possible benefits of this treatment model [38, 39], particularly in prolonging treatment retention. Despite these advantages, the available supporting research has been limited and varied [38]. A 2017 literature review [39] examined 10 studies, 4 of which utilized small-group models and 6 of which utilized group psychotherapy. The authors concluded that there was limited evidence to support group-based buprenorphine therapy but that much of the literature available was either weak or potentially biased. Based on the limited research available and isolated reports of success, this practice has some feasibility and expands buprenorphine access for patients. USER: explain the pros and cons of the three different medications for opioid use disorder, in plain language, for someone who is not a medical person Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
49
25
1,287
null
828
Provide your response in a professional and formal tone. Use the information given in the document without referring to external sources or requiring additional context. Avoid using technical jargon or acronyms that are not explained within the document.
What is Open AI doing to make sure AI doesn't threaten human existence.
Our Charter describes the principles we use to execute on OpenAI’s mission. OpenAI Charter. Published April 9, 2018. This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development. OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles: Broadly distributed benefits We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit. Long-term safety We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.” Technical leadership To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient. We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise. Cooperative orientation We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Provide your response in a professional and formal tone. Use the information given in the document without referring to external sources or requiring additional context. Avoid using technical jargon or acronyms that are not explained within the document. What is Open AI doing to make sure AI doesn't threaten human existence. Our Charter describes the principles we use to execute on OpenAI’s mission. OpenAI Charter. Published April 9, 2018. This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development. OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles: Broadly distributed benefits We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit. Long-term safety We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.” Technical leadership To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient. We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise. Cooperative orientation We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Provide your response in a professional and formal tone. Use the information given in the document without referring to external sources or requiring additional context. Avoid using technical jargon or acronyms that are not explained within the document. EVIDENCE: Our Charter describes the principles we use to execute on OpenAI’s mission. OpenAI Charter. Published April 9, 2018. This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development. OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles: Broadly distributed benefits We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit. Long-term safety We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.” Technical leadership To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient. We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise. Cooperative orientation We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
USER: What is Open AI doing to make sure AI doesn't threaten human existence. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
38
13
515
null
778
Do not use any outside information, but only the information in the context to inform your answer. Write your response as a news article with a neutral tone.
In 2003, how had financial reporting in the Federal Government progressed?
MANAGEMENT’S DISCUSSION AND ANALYSIS Introduction The quality and timeliness of financial reporting in the Federal Government has come a long way since the first Governmentwide report subject to audit issued in March 1998 for fiscal year 1997. At that time, only 8 of the 24 Chief Financial Officers Act (CFO Act) agencies received clean opinions on their 1997 financial statements. This year, 20 of the 24 agencies received clean opinions and 21 of the 32 entities, most significant to these statements, had audited financial statements issued by the end of the calendar year. This improvement in both quality and timeliness was concurrent with the application of new accounting principles and new accounting systems, and involved performing reconciliations that had never been attempted before. This has been a monumental effort requiring years of planning and preparation and the efforts of thousands. However, we still have much to accomplish before we meet our objective of timely, useful financial reporting. The accompanying 2003 Financial Report of the United States Government is required by 31 U.S.C. § 331(e)(1) to be submitted to Congress by March 31, and consists of Management’s Discussion and Analysis (MD&A), Statements of Net Cost, Statements of Operations and Changes in Net Position, Reconciliations of Net Operating Cost and Unified Budget Deficit, Statements of Changes in Cash Balance from Unified Budget and Other Activities, Balance Sheets, Stewardship Information (Unaudited), Notes to the Financial Statements, and Supplemental Information (Unaudited). Each section is preceded by a description of its contents. Executive Summary Purpose This Financial Report of the United States Government is prepared to give the President, Congress, and the American people information about the financial position of the Federal Government. This report provides, on an accrual basis of accounting, a broad, comprehensive view of the Federal Government’s finances that is not available elsewhere. It states the Government’s financial position and condition, its revenues and costs, assets and liabilities, and other obligations and commitments. It also discusses important financial issues and significant conditions that may affect future operations. Operating Results Revenues were down by $81.7 billion primarily due to lower tax collections and costs were up by $225.8 billion due to, among other things, fighting the global war on terrorism. This resulted in a net operating cost of $665.0 billion. This compares with the net operating cost of $364.9 billion for fiscal year 2002. This MD&A discusses results in a historical context and includes a chart (page 17) that shows the relationship of prior U.S. budget deficits as a percentage of the U.S. gross domestic product (GDP), which is the total value of goods and services produced in the United States. Economic Results After recovering in fiscal year 2002 from the economic downturn, the economy continued to accelerate in fiscal year 2003 and achieved strong growth. The rate of increase in real GDP picked up in each of the last three quarters of the fiscal year and productivity continued to record substantial gains. The labor market stabilized in fiscal year 2003 following job losses in the previous year and in the final quarter of the fiscal year, employment began to increase. 
Improvement in the economy was aided by new fiscal policies in 2003, but the lingering effect of the recession and loss in equity wealth, the war with Iraq, homeland security spending, and lower taxes enacted to stimulate growth contributed to a widening in the Federal budget deficit to $374.8 billion for the fiscal year. Overall Perspective The 2003 balance sheet shows assets of $1,394 billion and liabilities of $8,499 billion, for a balance or negative net position of $7,105 billion. The Government’s responsibilities to make future payments for social insurance and certain other programs are not shown as liabilities according to Federal accounting standards; however, they are measured in other contexts. These programmatic commitments remain Federal responsibilities and as currently structured will have a significant claim on budgetary resources in the future. Significant Reporting Items for Fiscal Year 2003 Department of Defense Property Addition In fiscal year 2003, the Department of Defense’s (DOD) reported general property, plant, and equipment, net increased by $323.7 billion or 264.2 percent over fiscal year 2002. The majority of this increase was due to the initial recording of the value of DOD’s military equipment. Beginning with the fiscal year 2003 financial statements, DOD was required to record on the balance sheet the value of its military equipment under the new Statement of Federal Financial Accounting Standard No. 23, Eliminating the Category National Defense Property, Plant, and Equipment (SFFAS No. 23) issued by the Federal Accounting Standards Advisory Board (FASAB) in May 2003. SFFAS No. 23 establishes generally accepted accounting principles for valuing and reporting military equipment in Federal financial statements. Previously, military equipment was reported as national defense property, plant, and equipment in the Stewardship Information section of this report. Creation of the Department of Homeland Security On March 1, 2003, more than 20 entities and offices and some 180,000 employees were transferred into the Department of Homeland Security (DHS). The creation of the DHS in 2003 was the most significant transformation of the Federal Government since 1947 when the various branches of the U.S. Armed Forces were merged into a new Department of Defense. In the aftermath of the September 11, 2001, terrorist attacks, the President and the Congress recognized the need to coordinate the efforts of many Federal agencies, offices, and programs which had responsibility for various aspects of protecting and securing our homeland. President Bush proposed the creation of DHS, and Congress passed legislation establishing this new department. See the U.S. Government Structure & Performance section of this report for further details and a chart showing the entities transferred into DHS. Iraq Operations In March 2003, an international coalition led by the United States liberated Iraq and is overseeing a transformation. The vision for a sovereign, stable, prosperous, and democratic Iraq centered on four goals: establishing a secure environment, restoring essential services, promoting economic growth, and developing good governance through a legitimate constitutional government. To conduct military operations and address these goals in 2003, several sources of funding were used: appropriated and nonappropriated funds (seized and vested assets and the Development Fund for Iraq). U.S. agencies obligated $3.9 billion in appropriated funds for Iraq relief, renewal, and construction. 
Congress also appropriated funds to DOD for Operation Iraqi Freedom in the Emergency Wartime Supplemental Appropriations Act, 2003 (Public Law 108-11) and the Consolidated Appropriations Resolution, 2003 (Public Law 108-7). DOD obligated $42.4 billion for incremental costs in support of Iraqi Freedom. For further discussion of the cost of Iraq operations and funding sources and uses, see Iraq Operations in the Financial Results section at the end of the Revenue and Cost Summary. As recently as 1996, not only were just six agencies able to issue financial statements with clean opinions, but most agencies took at least 5 months to issue them. Before implementing the Improved Financial Performance Initiative of the President’s Management Agenda, 18 of 24 of the Government’s major agencies received clean opinions on their audited financial reports; however, it still took 5 months to prepare most of them. Today, most major agencies are getting clean audit opinions and issuing them in a condensed period of time. A clean audit opinion provides assurance that agencies are responsibly accounting for the people’s money. If it takes them 5 months to issue audited financial statements, however, it is a good indication they do not have timely and accurate financial information available on a regular basis. That is why the Administration is working with all agencies to close their books more quickly. Eight agencies have accelerated the issuance of audited financial reports to 45 days after year end, which is the 2004 Governmentwide requirement. One particular agency of note is the U.S. Agency for International Development (USAID), which not only accelerated the reporting of its financial statements, but also received a clean opinion for the first time in its history. Additionally, agencies are now reporting quarterly financial information in addition to the end of the year data. Through the first quarter of fiscal year 2004, four agencies – Education, Environmental Protection Agency (EPA), National Science Foundation (NSF), and SSA – have demonstrated their ability to use timely and accurate financial information to make decisions about program management. For example, Education uses up-to-the-minute financial data to track whether schools are receiving the appropriate amount of Federal funds. In addition, EPA’s Leaking Underground Storage Tank program negotiates performance commitments with grantees and provides resources based on those commitments. If a grantee is not meeting its commitments, EPA may withhold some resources from the nonperformers and redirect those resources to grantees that are meeting their commitments.
false
28
11
1,443
null
185
You must respond using only information provided in the prompt. Explain your reasoning with at least 2 supporting points without using direct quotes over 5 words.
summarize the info from the answer to question 1 in a 3-column table
Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q So, a couple of questions here. Just, one, Jamie, could you talk through the decision to raise the dividend kind of mid-cycle, it felt like, preCCAR? And also, help us understand how you're thinking about where that payout ratio, that dividend payout ratio, range should be. Because over the past several years, it's been somewhere between 24% and 32%. And so, is this suggesting we could be towards the higher-end of that range or even expanding above that? And then I also just wanted to understand the buyback and the keeping of the CET1 at 15% here. The minimum is 11.9%. I know it's – we have to wait for Basel III Endgame re-proposal to come through and all that, but should we be expecting that, hey, we're going to hold 15% CET1 until we know all these rules? Thanks. Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Yeah. So, Betsy, before I answer the question, I want to say something on behalf of all of us at JPMorgan and, me personally, thrilled to have you on this call. For those that don't know, Betsy has been through a terrible medical episode and it's a reminder to all of us how lucky we are to be here. But, Betsy, in particular, the amount of respect we have, not just in your work, but in your character over the last 20 plus years has been exceptional. So, on behalf of all of us, I just want to welcome you back and thrilled to have you here. And so, you're asking a pertinent question. So, we're earning a lot of money. Our capital cup runneth over, and that's why we've increased the dividend. And if you're asking me what we'd like to do is to pay out something like a third, a third of normalized earnings. Of course, it's hard to calculate always what normalized earnings are, but we don't mind being a little bit ahead of that sometimes, a little bit behind that sometimes. If I could give people kind of consistent dividend guidance, et cetera, I think the far more important question is the 15%. So, look at the 15%, I'm going to oversimplify it, that basically will prepare us for the total Basel Endgame today, roughly. The specifics don't matter that much. Remember, we can do a lot of things to change that in the short-run or the long-run, but it looks like Basel III Endgame may not be the worst case. It'll be something less than that. So, obviously, when and if that happens, it would free up a lot of capital, and I'm going to say in the order of $20 billion or something like that. And, yes, we've always had the capital hierarchy the same way, which is we're going to use capital to build our business first, I mean, pay the dividend – steady dividend, build the business, and if we think it's appropriate to buy back stock. We're continuing to buy back stock at $2 billion a quarter (sic). I personally do not want to buy back a lot more than that at these current prices. I think you've all heard me talk about the world, things like that. So, waiting in preparation for Basel. Hopefully we'll know something later, and then we can be much more specific with you all. But in the meantime, there's also – it's very important to put in mind, there are short-term uses for capital that are good for shareholders, that could reduce our CET1 too. So, you may see us do things in the short-run that will increase earnings, increase capital, that are using up that capital. Jeremy mentioned on the – on one of the things that we know, the balance sheet and how we use the balance sheet for credit and trading, we could do things now. So, it's a great position to be in. We're going to be very, very patient. I urge all the analysts to keep in mind, excess capital is not wasted capital, it's earnings in store. We will deploy it in a very good way for shareholders in due course. Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Excellent. Thank you so much. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A And yeah, Betsy, I just wanted to add my welcome back thoughts as well, and just a very minor edit to Jamie's answer. I think he just misspoke when he said $2 billion a year in buybacks. The trajectory is $2 billion... Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Oh. Sorry. $2 billion a quarter. Yeah. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A ...a quarter. Otherwise, I have nothing to add to Jamie's very complete answer. But welcome back, Betsy. Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Okay. Thank you so much, and appreciate it. Looking forward to seeing you at Investor Day on May 20th. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Excellent. Us too. Operator: Thank you. Our next question comes from Jim Mitchell with Seaport Global. You may proceed. Jim Mitchell Analyst, Seaport Global Securities LLC Q Hey. Good morning. Jeremy, can you speak to the trends you're seeing with respect to deposit migration in the quarter, if there's been any change? Have you seen that migration start to slow or not? Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. Good question, Jim. I think the simplest and best answer to that is: not really. So, as we've been saying for a while, migration from checking and savings to CDs is sort of the dominant trend with this driving the increase in weighted average rate paid in the consumer deposit franchise, that continues. We continue to capture that money-in-motion at a very high rate. We're very happy about what that means about the consumer franchise and level of engagement that we're seeing. I'm aware that there's a little bit of a narrative out there about are we seeing the end of what people sometimes refer to as cash sorting. We've looked at that data. We see some evidence that maybe it's slowing a little bit. We're quite cautious on that. We really sort of don't think it makes sense to assume that in a world where checking and savings is paying effectively zero and the policy rate is above 5% that you're not going to see ongoing migration. And frankly, we expect to see that even in a world where – even if the current yield curve environment were to change and meaningful cuts were to get reintroduced and we would actually start to see those, we would still expect to see ongoing migration and yield-seeking behavior. So, it's quite conceivable and this is actually on the yield curve that we had in fourth quarter that had six cuts in it. We were still nonetheless expecting an increase in weighted average rate paid as that migration continues. So, I would say no meaningful change in the trends and the expectation for ongoing migration is very much still there. Jim Mitchell Analyst, Seaport Global Securities LLC Q Okay. And just a follow-up on that and just sort of bigger picture on NII. Is that sort of the biggest driver of your outlook? Is it migration? Is it the forward curve? Is it balances? It sounds like it's migration, but just I'd be curious to hear your thoughts on the biggest drivers of upside or downside. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. So, I mean I think the drivers of, let's say, what's embedded in the current guidance is actually not meaningfully different from what it was in the fourth quarter, meaning it's the current yield curve, which is a little bit stale now. 
But the snap from quarter-end had roughly three cuts in it. So, it's the current yield curve, it's what I just said, the expectation of ongoing internal migration. There is some meaningful offset from Card revolve growth, which while it's a little bit less than it was in prior years, is still a tailwind there.
You must respond using only information provided in the prompt. Explain your reasoning with at least 2 supporting points without using direct quotes over 5 words. Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q So, a couple of questions here. Just, one, Jamie, could you talk through the decision to raise the dividend kind of mid-cycle, it felt like, preCCAR? And also, help us understand how you're thinking about where that payout ratio, that dividend payout ratio, range should be. Because over the past several years, it's been somewhere between 24% and 32%. And so, is this suggesting we could be towards the higher-end of that range or even expanding above that? And then I also just wanted to understand the buyback and the keeping of the CET1 at 15% here. The minimum is 11.9%. I know it's – we have to wait for Basel III Endgame re-proposal to come through and all that, but should we be expecting that, hey, we're going to hold 15% CET1 until we know all these rules? Thanks. ...................................................................................................................................................................................................................................................... Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Yeah. So, Betsy, before I answer the question, I want to say something on behalf of all of us at JPMorgan and, me personally, thrilled to have you on this call. For those that don't know, Betsy has been through a terrible medical episode and it's a reminder to all of us how lucky we are to be here. But, Betsy, in particular, the amount of respect we have, not just in your work, but in your character over the last 20 plus years has been exceptional. So, on behalf of all of us, I just want to welcome you back and thrilled to have you here. And so, you're asking a pertinent question. So, we're earning a lot of money. Our capital cup runneth over, and that's why we've increased the dividend. And if you're asking me what we'd like to do is to pay out something like a third, a third of normalized earnings. Of course, it's hard to calculate always what normalized earnings are, but we don't mind being a little bit ahead of that sometimes, a little bit behind that sometimes. If I could give people kind of consistent dividend guidance, et cetera, I think the far more important question is the 15%. So, look at the 15%, I'm going to oversimplify it, that basically will prepare us for the total Basel Endgame today, roughly. The specifics don't matter that much. 5 Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Remember, we can do a lot of things to change that in the short-run or the long-run, but it looks like Basel III Endgame may not be the worst case. It'll be something less than that. So, obviously, when and if that happens, it would free up a lot of capital, and I'm going to say in the order of $20 billion or something like that. And, yes, we've always had the capital hierarchy the same way, which is we're going to use capital to build our business first, I mean, pay the dividend – steady dividend, build the business, and if we think it's appropriate to buy back stock. We're continuing to buy back stock at $2 billion a quarter (sic). I personally do not want to buy back a lot more than that at these current prices. I think you've all heard me talk about the world, things like that. So, waiting in preparation for Basel. 
Hopefully we'll know something later, and then we can be much more specific with you all. But in the meantime, there's also – it's very important to put in mind, there are short-term uses for capital that are good for shareholders, that could reduce our CET1 too. So, you may see us do things in the short-run that will increase earnings, increase capital, that are using up that capital. Jeremy mentioned on the – on one of the things that we know, the balance sheet and how we use the balance sheet for credit and trading, we could do things now. So, it's a great position to be in. We're going to be very, very patient. I urge all the analysts to keep in mind, excess capital is not wasted capital, it's earnings in store. We will deploy it in a very good way for shareholders in due course. ...................................................................................................................................................................................................................................................... Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Excellent. Thank you so much. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A And yeah, Betsy, I just wanted to add my welcome back thoughts as well, and just a very minor edit to Jamie's answer. I think he just misspoke when he said $2 billion a year in buybacks. The trajectory is $2 billion... ...................................................................................................................................................................................................................................................... Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Oh. Sorry. $2 billion a quarter. Yeah. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A ...a quarter. Otherwise, I have nothing to add to Jamie's very complete answer. But welcome back, Betsy. ...................................................................................................................................................................................................................................................... Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Okay. Thank you so much, and appreciate it. Looking forward to seeing you at Investor Day on May 20th. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Excellent. Us too. ...................................................................................................................................................................................................................................................... Operator: Thank you. Our next question comes from Jim Mitchell with Seaport Global. You may proceed. 
...................................................................................................................................................................................................................................................... 6 Jim Mitchell Analyst, Seaport Global Securities LLC Q Hey. Good morning. Jeremy, can you speak to the trends you're seeing with respect to deposit migration in the quarter, if there's been any change? Have you seen that migration start to slow or not? ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. Good question, Jim. I think the simplest and best answer to that is: not really. So, as we've been saying for a while, migration from checking and savings to CDs is sort of the dominant trend with this driving the increase in weighted average rate paid in the consumer deposit franchise, that continues. We continue to capture that money-in-motion at a very high rate. We're very happy about what that means about the consumer franchise and level of engagement that we're seeing. I'm aware that there's a little bit of a narrative out there about are we seeing the end of what people sometimes refer to as cash sorting. We've looked at that data. We see some evidence that maybe it's slowing a little bit. We're quite cautious on that. We really sort of don't think it makes sense to assume that in a world where checking and savings is paying effectively zero and the policy rate is above 5% that you're not going to see ongoing migration. And frankly, we expect to see that even in a world where – even if the current yield curve environment were to change and meaningful cuts were to get reintroduced and we would actually start to see those, we would still expect to see ongoing migration and yield-seeking behavior. So, it's quite conceivable and this is actually on the yield curve that we had in fourth quarter that had six cuts in it. We were still nonetheless expecting an increase in weighted average rate paid as that migration continues. So, I would say no meaningful change in the trends and the expectation for ongoing migration is very much still there. ...................................................................................................................................................................................................................................................... Jim Mitchell Analyst, Seaport Global Securities LLC Q Okay. And just a follow-up on that and just sort of bigger picture on NII. Is that sort of the biggest driver of your outlook? Is it migration? Is it the forward curve? Is it balances? It sounds like it's migration, but just I'd be curious to hear your thoughts on the biggest drivers of upside or downside. .................................................................................................................................................................................................................................. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. So, I mean I think the drivers of, let's say, what's embedded in the current guidance is actually not meaningfully different from what it was in the fourth quarter, meaning it's the current yield curve, which is a little bit stale now. 
But the snap from quarter-end had roughly three cuts in it. So, it's the current yield curve, it's what I just said, the expectation of ongoing internal migration. There is some meaningful offset from Card revolve growth, which while it's a little bit less than it was in prior years, is still a tailwind there. summarize the info from the answer to question 1 in a 3-column table
You must respond using only information provided in the prompt. Explain your reasoning with at least 2 supporting points without using direct quotes over 5 words. EVIDENCE: Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q So, a couple of questions here. Just, one, Jamie, could you talk through the decision to raise the dividend kind of mid-cycle, it felt like, preCCAR? And also, help us understand how you're thinking about where that payout ratio, that dividend payout ratio, range should be. Because over the past several years, it's been somewhere between 24% and 32%. And so, is this suggesting we could be towards the higher-end of that range or even expanding above that? And then I also just wanted to understand the buyback and the keeping of the CET1 at 15% here. The minimum is 11.9%. I know it's – we have to wait for Basel III Endgame re-proposal to come through and all that, but should we be expecting that, hey, we're going to hold 15% CET1 until we know all these rules? Thanks. ...................................................................................................................................................................................................................................................... Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Yeah. So, Betsy, before I answer the question, I want to say something on behalf of all of us at JPMorgan and, me personally, thrilled to have you on this call. For those that don't know, Betsy has been through a terrible medical episode and it's a reminder to all of us how lucky we are to be here. But, Betsy, in particular, the amount of respect we have, not just in your work, but in your character over the last 20 plus years has been exceptional. So, on behalf of all of us, I just want to welcome you back and thrilled to have you here. And so, you're asking a pertinent question. So, we're earning a lot of money. Our capital cup runneth over, and that's why we've increased the dividend. And if you're asking me what we'd like to do is to pay out something like a third, a third of normalized earnings. Of course, it's hard to calculate always what normalized earnings are, but we don't mind being a little bit ahead of that sometimes, a little bit behind that sometimes. If I could give people kind of consistent dividend guidance, et cetera, I think the far more important question is the 15%. So, look at the 15%, I'm going to oversimplify it, that basically will prepare us for the total Basel Endgame today, roughly. The specifics don't matter that much. 5 Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Remember, we can do a lot of things to change that in the short-run or the long-run, but it looks like Basel III Endgame may not be the worst case. It'll be something less than that. So, obviously, when and if that happens, it would free up a lot of capital, and I'm going to say in the order of $20 billion or something like that. And, yes, we've always had the capital hierarchy the same way, which is we're going to use capital to build our business first, I mean, pay the dividend – steady dividend, build the business, and if we think it's appropriate to buy back stock. We're continuing to buy back stock at $2 billion a quarter (sic). I personally do not want to buy back a lot more than that at these current prices. I think you've all heard me talk about the world, things like that. So, waiting in preparation for Basel. 
Hopefully we'll know something later, and then we can be much more specific with you all. But in the meantime, there's also – it's very important to put in mind, there are short-term uses for capital that are good for shareholders, that could reduce our CET1 too. So, you may see us do things in the short-run that will increase earnings, increase capital, that are using up that capital. Jeremy mentioned on the – on one of the things that we know, the balance sheet and how we use the balance sheet for credit and trading, we could do things now. So, it's a great position to be in. We're going to be very, very patient. I urge all the analysts to keep in mind, excess capital is not wasted capital, it's earnings in store. We will deploy it in a very good way for shareholders in due course. ...................................................................................................................................................................................................................................................... Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Excellent. Thank you so much. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A And yeah, Betsy, I just wanted to add my welcome back thoughts as well, and just a very minor edit to Jamie's answer. I think he just misspoke when he said $2 billion a year in buybacks. The trajectory is $2 billion... ...................................................................................................................................................................................................................................................... Jamie Dimon Chairman & Chief Executive Officer, JPMorgan Chase & Co. A Oh. Sorry. $2 billion a quarter. Yeah. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A ...a quarter. Otherwise, I have nothing to add to Jamie's very complete answer. But welcome back, Betsy. ...................................................................................................................................................................................................................................................... Betsy L. Graseck Analyst, Morgan Stanley & Co. LLC Q Okay. Thank you so much, and appreciate it. Looking forward to seeing you at Investor Day on May 20th. ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Excellent. Us too. ...................................................................................................................................................................................................................................................... Operator: Thank you. Our next question comes from Jim Mitchell with Seaport Global. You may proceed. 
...................................................................................................................................................................................................................................................... Jim Mitchell Analyst, Seaport Global Securities LLC Q Hey. Good morning. Jeremy, can you speak to the trends you're seeing with respect to deposit migration in the quarter, if there's been any change? Have you seen that migration start to slow or not? ...................................................................................................................................................................................................................................................... Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. Good question, Jim. I think the simplest and best answer to that is: not really. So, as we've been saying for a while, migration from checking and savings to CDs is sort of the dominant trend with this driving the increase in weighted average rate paid in the consumer deposit franchise, that continues. We continue to capture that money-in-motion at a very high rate. We're very happy about what that means about the consumer franchise and level of engagement that we're seeing. I'm aware that there's a little bit of a narrative out there about are we seeing the end of what people sometimes refer to as cash sorting. We've looked at that data. We see some evidence that maybe it's slowing a little bit. We're quite cautious on that. We really sort of don't think it makes sense to assume that in a world where checking and savings is paying effectively zero and the policy rate is above 5% that you're not going to see ongoing migration. And frankly, we expect to see that even in a world where – even if the current yield curve environment were to change and meaningful cuts were to get reintroduced and we would actually start to see those, we would still expect to see ongoing migration and yield-seeking behavior. So, it's quite conceivable and this is actually on the yield curve that we had in fourth quarter that had six cuts in it. We were still nonetheless expecting an increase in weighted average rate paid as that migration continues. So, I would say no meaningful change in the trends and the expectation for ongoing migration is very much still there. ...................................................................................................................................................................................................................................................... Jim Mitchell Analyst, Seaport Global Securities LLC Q Okay. And just a follow-up on that and just sort of bigger picture on NII. Is that sort of the biggest driver of your outlook? Is it migration? Is it the forward curve? Is it balances? It sounds like it's migration, but just I'd be curious to hear your thoughts on the biggest drivers of upside or downside. .................................................................................................................................................................................................................................. Jeremy Barnum Chief Financial Officer, JPMorgan Chase & Co. A Yeah. So, I mean I think the drivers of, let's say, what's embedded in the current guidance is actually not meaningfully different from what it was in the fourth quarter, meaning it's the current yield curve, which is a little bit stale now.
But the snap from quarter-end had roughly three cuts in it. So, it's the current yield curve, it's what I just said, the expectation of ongoing internal migration. There is some meaningful offset from Card revolve growth, which while it's a little bit less than it was in prior years, is still a tailwind there. USER: summarize the info from the answer to question 1 in a 3-column table Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
13
1,407
null
805
Your response to the prompt shall consist exclusively of information contained within the context block with no use of external sources. Your responses shall be between 150 and 200 words and shall not be presented in a list format of any sort.
Please give a short summary of the battery systems of the Mars Helicopter.
F. Telecommunication System Once separated from the host spacecraft (lander or rover), the Mars Helicopter can only communicate to or be commanded from Earth via radio link. This link is implemented using a COTS 802.15.4 (Zig-Bee) standard 900 MHz chipset, SiFlex 02, originally manufactured by LS Research. Two identical SiFlex parts are used, one of which is an integral part of a base station mounted on the host spacecraft, the other being included in the helicopter electronics. These radios are mounted on identical, custom PC boards which provide mechanical support, power, heat distribution, and other necessary infrastructure. The boards on each side of the link are connected to their respective custom antennas. The helicopter antenna is a loaded quarter wave monopole positioned near the center of the solar panel (which also serves as ground plane) at the top of the entire helicopter assembly and is fed through a miniature coaxial cable routed through the mast to the electronics below. The radio is configured and exchanges data with the helicopter and base station system computers via UART. One challenge in using off-the-shelf assemblies for electronics systems to be used on Mars is the low temperatures expected on the surface. At night, the antenna and cable assemblies will see temperatures as low as −140 C. Electronics assemblies on both base station and helicopter will be kept “warm” (not below −15 C) by heaters as required. Another challenge is antenna placement and accommodation on the larger host spacecraft. Each radio emits approximately 0.75 W power at 900 MHz with the board consuming up to 3 W supply power when transmitting and approximately 0.15 W while receiving. The link is designed to relay data at over-the-air rates of 20 kbps or 250 kbps over distances of up to 1000 m. A one-way data transmission mode is used to recover data from the helicopter in real time during its brief sorties. When landed, a secure two-way mode is used. Due to protocol overhead and channel management, a maximum return throughput in flight of 200 kbps is expected while two-way throughputs as low as 10 kbps are supported if required by marginal, landed circumstances. G. Power & Energy System The helicopter is powered by a Li-Ion battery system that is recharged daily by a solar panel. The energy in the battery is used for operating heaters to survive the cold Martian nights as well as operate the helicopter actuators and avionics during short flights lasting from 90 seconds to a few minutes. Depending on the latitude of operations and the Martian season, recharging of this battery through the solar panel could occur over one to multiple sols (Martian days). The helicopter battery shown in Fig. 12 consists of 6 Sony SE US18650 VTC4 Li-ion cells with a nameplate capacity of 2 Ah. The maximum discharge rate is greater than 25 A and the maximum cell voltage specified by the manufacturer is 4.25 V. The continuous tested power load capability of this battery is 480 W with a peak power capability of 510 W. Battery voltage is in the range of 15–25.2 V and the total mass of the 6 cells is 273 g. A cell balancing charge management system controlled by the FPGA ensures that all the individual cells are at a uniform voltage. A de-rated end-of-life battery capacity of 35.75 Wh is available for use.
Of this capacity, 10.73 Wh (30%) is kept as reserve, night-time survival energy usage is estimated at 21 Wh for typical operation in the northern latitudes in the spring season, and approximately 10 Wh is available for flight. Assuming that 20% of the power is at the peak load of 510 W and 80% is at a continuous load of 360 W, approximately 90 sec of flight is possible. These energy projections represent conservative worst-case end-of-mission battery performance at 0 C initial temperature. More moderate power loads will extend the flight time. The solar panel is made from Inverted Metamorphic (IMM4J) cells from SolAero Technologies. The cells are optimized for the Mars solar spectrum and occupy a rectangular area with 680 cm2 of substrate (544 cm2 active cell area) in a region centered and immediately above the co-axial rotors. This region minimally interferes with the flow through the rotor. H. Thermal System The helicopter must survive the cold of the night on Mars where temperatures can drop to -100 C or lower. The most critical component is the battery which is kept above -15 C through the night as it powers Kapton film heaters attached to the battery cells. The avionics boards in the ECM surround the battery and are also kept at an elevated temperature by virtue of their proximity to the warm battery assembly. Insulation around the avionics boards is provided by a carbon-dioxide gap of 3 cm width. Additional insulation can be provided by replacing the carbon-dioxide gas with an Aerogel formulation. The outermost fuselage thermal coating is from Sheldahl with Solar absorptivity α = 0.8 and infra-red (IR) emissivity ε = 0.1. In addition to thermal losses through the gas gap (or aerogel), additional losses occur due to conduction in the mast as well as through the copper wiring that penetrates the ECM from the mast. To minimize the latter, the wire gauges are selected to be of the thinnest gauges that can still support the current draw during operations without overheating. Prior to flight, under the control of the FPGA, the thermal system powers on heaters in the motor control boards that have been exposed to the ambient temperatures. The internal battery temperature is brought up to 5 C to allow high-power energy extraction from the cells. During operation the ECM and battery warm up as a result of avionics operations and battery self-heating. However, the thermal inertia of the elements is such that for the short flights of the helicopter, there is no overheating.
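The approximately 90-second flight estimate above follows from a simple energy-budget calculation. The short Python sketch below reproduces that arithmetic using only the figures stated in the passage (10 Wh budgeted for flight, a 510 W peak load for 20% of the flight, and a 360 W continuous load for the remaining 80%); the variable names are illustrative and are not drawn from any flight software.

flight_energy_wh = 10.0          # energy budgeted for flight (Wh), per the passage
peak_w, cont_w = 510.0, 360.0    # peak and continuous power loads (W)
peak_frac, cont_frac = 0.2, 0.8  # stated split of flight time between the two loads

# Average electrical load over the flight, then convert Wh to joules and divide by watts.
avg_power_w = peak_frac * peak_w + cont_frac * cont_w     # 390 W
flight_time_s = flight_energy_wh * 3600.0 / avg_power_w   # about 92 s

print(f"average load: {avg_power_w:.0f} W, estimated flight time: {flight_time_s:.0f} s")

This works out to roughly 92 seconds, consistent with the passage's figure of approximately 90 sec; a lower average load lengthens the available flight time accordingly.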
Your response to the prompt shall consist exclusively of information contained within the context block with no use of external sources. Your responses shall be between 150 and 200 words and shall not be presented in a list format of any sort. F. Telecommunication System Once separated from the host spacecraft (lander or rover), the Mars Helicopter can only communicate to or be commanded from Earth via radio link. This link is implemented using a COTS 802.15.4 (Zig-Bee) standard 900 MHz chipset, SiFlex 02, originally manufactured by LS Research. Two identical SiFlex parts are used, one of which is an integral part of a base station mounted on the host spacecraft, the other being included in the helicopter electronics. These radios are mounted on identical, custom PC boards which provide mechanical support, power, heat distribution, and other necessary infrastructure. The boards on each side of the link are connected to their respective custom antennas. The helicopter antenna is a loaded quarter wave monopole positioned near the center of the solar panel (which also serves as ground plane) at the top of the entire helicopter assembly and is fed through a miniature coaxial cable routed through the mast to the electronics below. The radio is configured and exchanges data with the helicopter and base station system computers via UART. One challenge in using off-the-shelf assemblies for electronics systems to be used on Mars is the low temperatures expected on the surface. At night, the antenna and cable assemblies will see temperatures as low as −140 C. Electronics assemblies on both base station and helicopter will be kept “warm” (not below −15 C) by heaters as required. Another challenge is antenna placement and accommodation on the larger host spacecraft. Each radio emits approximately 0.75 W power at 900 MHz with the board consuming up to 3 W supply power when transmitting and approximately 0.15 W while receiving. The link is designed to relay data at over-the-air rates of 20 kbps or 250 kbps over distances of up to 1000 m. A one-way data transmission mode is used to recover data from the helicopter in real time during its brief sorties. When landed, a secure two-way mode is used. Due to protocol overhead and channel management, a maximum return throughput in flight of 200 kbps is expected while two-way throughputs as low as 10 kbps are supported if required by marginal, landed circumstances. G. Power & Energy System The helicopter is powered by a Li-Ion battery system that is recharged daily by a solar panel. The energy in the battery is used for operating heaters to survive the cold Martian nights as well as operate the helicopter actuators and avionics during short flights lasting from 90 seconds to a few minutes. Depending on the latitude of operations and the Martian season, recharging of this battery through the solar panel could occur over one to multiple sols (Martian days). The helicopter battery shown in Fig. 12 consists of 6 Sony SE US18650 VTC4 Li-ion cells with a nameplate capacity of 2 Ah. The maximum discharge rate is greater than 25 A and the maximum cell voltage specified by the manufacturer is 4.25 V. The continuous tested power load capability of this battery is 480 W with a peak power capability of 510 W. Battery voltage is in the range of 15–25.2 V and the total mass of the 6 cells is 273 g. A cell balancing charge management system controlled by the FPGA ensures that all the individual cells are at a uniform voltage.
A de-rated end-of-life battery capacity of 35.75 Wh is available for use. Of this capacity, 10.73 Wh (30%) is kept as reserve, night-time survival energy usage is estimated at 21 Wh for typical operation in the northern latitudes in the spring season, and approximately 10 Wh is available for flight. Assuming that 20% of the power is at the peak load of 510 W and 80% is at a continuous load of 360 W, approximately 90 sec of flight is possible. These energy projections represent conservative worst-case end-of-mission battery performance at 0 C initial temperature. More moderate power loads will extend the flight time. The solar panel is made from Inverted Metamorphic (IMM4J) cells from SolAero Technologies. The cells are optimized for the Mars solar spectrum and occupy a rectangular area with 680 cm2 of substrate (544 cm2 active cell area) in a region centered and immediately above the co-axial rotors. This region minimally interferes with the flow through the rotor. H. Thermal System The helicopter must survive the cold of the night on Mars where temperatures can drop to -100 C or lower. The most critical component is the battery which is kept above -15 C through the night as it powers Kapton film heaters attached to the battery cells. The avionics boards in the ECM surround the battery and are also kept at an elevated temperature by virtue of their proximity to the warm battery assembly. Insulation around the avionics boards is provided by a carbon-dioxide gap of 3 cm width. Additional insulation can be provided by replacing the carbon-dioxide gas with an Aerogel formulation. The outermost fuselage thermal coating is from Sheldahl with Solar absorptivity α = 0.8 and infra-red (IR) emissivity ε = 0.1. In addition to thermal losses through the gas gap (or aerogel), additional losses occur due to conduction in the mast as well as through the copper wiring that penetrates the ECM from the mast. To minimize the latter, the wire gauges are selected to be of the thinnest gauges that can still support the current draw during operations without overheating. Prior to flight, under the control of the FPGA, the thermal system powers on heaters in the motor control boards that have been exposed to the ambient temperatures. The internal battery temperature is brought up to 5 C to allow high-power energy extraction from the cells. During operation the ECM and battery warm up as a result of avionics operations and battery self-heating. However, the thermal inertia of the elements is such that for the short flights of the helicopter, there is no overheating. Please give a short summary of the battery systems of the Mars Helicopter.
Your response to the prompt shall consist exclusively of information contained within the context block with no use of external sources. Your responses shall be between 150 and 200 words and shall not be presented in a list format of any sort. EVIDENCE: F. Telecommunication System Once separated from the host spacecraft (lander or rover), the Mars Helicopter can only communicate to or be commanded from Earth via radio link. This link is implemented using a COTS 802.15.4 (Zig-Bee) standard 900 MHz chipset, SiFlex 02, originally manufactured by LS Research. Two identical SiFlex parts are used, one of which is an integral part of a base station mounted on the host spacecraft, the other being included in the helicopter electronics. These radios are mounted on identical, custom PC boards which provide mechanical support, power, heat distribution, and other necessary infrastructure. The boards on each side of the link are connected to their respective custom antennas. The helicopter antenna is a loaded quarter wave monopole positioned near the center of the solar panel (which also serves as ground plane) at the top of the entire helicopter assembly and is fed through a miniature coaxial cable routed through the mast to the electronics below. The radio is configured and exchanges data with the helicopter and base station system computers via UART. One challenge in using off-the-shelf assemblies for electronics systems to be used on Mars is the low temperatures expected on the surface. At night, the antenna and cable assemblies will see temperatures as low as −140 C. Electronics assemblies on both base station and helicopter will be kept “warm” (not below −15 C) by heaters as required. Another challenge is antenna placement and accommodation on the larger host spacecraft. Each radio emits approximately 0.75 W power at 900 MHz with the board consuming up to 3 W supply power when transmitting and approximately 0.15 W while receiving. The link is designed to relay data at over-the-air rates of 20 kbps or 250 kbps over distances of up to 1000 m. A one-way data transmission mode is used to recover data from the helicopter in real time during its brief sorties. When landed, a secure two-way mode is used. Due to protocol overhead and channel management, a maximum return throughput in flight of 200 kbps is expected while two-way throughputs as low as 10 kbps are supported if required by marginal, landed circumstances. G. Power & Energy System The helicopter is powered by a Li-Ion battery system that is recharged daily by a solar panel. The energy in the battery is used for operating heaters to survive the cold Martian nights as well as operate the helicopter actuators and avionics during short flights lasting from 90 seconds to a few minutes. Depending on the latitude of operations and the Martian season, recharging of this battery through the solar panel could occur over one to multiple sols (Martian days). The helicopter battery shown in Fig. 12 consists of 6 Sony SE US18650 VTC4 Li-ion cells with a nameplate capacity of 2 Ah. The maximum discharge rate is greater than 25 A and the maximum cell voltage specified by the manufacturer is 4.25 V. The continuous tested power load capability of this battery is 480 W with a peak power capability of 510 W. Battery voltage is in the range of 15–25.2 V and the total mass of the 6 cells is 273 g. A cell balancing charge management system controlled by the FPGA ensures that all the individual cells are at a uniform voltage.
A de-rated end-of-life battery capacity of 35.75 Wh is available for use. Of this capacity, 10.73 Wh (30%) is kept as reserve, night-time survival energy usage is estimated at 21 Wh for typical operation in the northern latitudes in the spring season, and approximately 10 Wh is available for flight. Assuming that 20% of the power is at the peak load of 510 W and 80% is at a continuous load of 360 W, approximately 90 sec of flight is possible. These energy projections represent conservative worst-case end-of-mission battery performance at 0 C initial temperature. More moderate power loads will extend the flight time. The solar panel is made from Inverted Metamorphic (IMM4J) cells from SolAero Technologies. The cells are optimized for the Mars solar spectrum and occupy a rectangular area with 680 cm2 of substrate (544 cm2 active cell area) in a region centered and immediately above the co-axial rotors. This region minimally interferes with the flow through the rotor. H. Thermal System The helicopter must survive the cold of the night on Mars where temperatures can drop to -100 C or lower. The most critical component is the battery which is kept above -15 C through the night as it powers Kapton film heaters attached to the battery cells. The avionics boards in the ECM surround the battery and are also kept at an elevated temperature by virtue of their proximity to the warm battery assembly. Insulation around the avionics boards is provided by a carbon-dioxide gap of 3 cm width. Additional insulation can be provided by replacing the carbon-dioxide gas with an Aerogel formulation. The outermost fuselage thermal coating is from Sheldahl with Solar absorptivity α = 0.8 and infra-red (IR) emissivity ε = 0.1. In addition to thermal losses through the gas gap (or aerogel), additional losses occur due to conduction in the mast as well as through the copper wiring that penetrates the ECM from the mast. To minimize the latter, the wire gauges are selected to be of the thinnest gauges that can still support the current draw during operations without overheating. Prior to flight, under the control of the FPGA, the thermal system powers on heaters in the motor control boards that have been exposed to the ambient temperatures. The internal battery temperature is brought up to 5 C to allow high-power energy extraction from the cells. During operation the ECM and battery warm up as a result of avionics operations and battery self-heating. However, the thermal inertia of the elements is such that for the short flights of the helicopter, there is no overheating. USER: Please give a short summary of the battery systems of the Mars Helicopter. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
42
13
981
null
663
In constructing your response, you are to exclusively rely on the information presented in the provided context source, avoiding all information from other external sources. Additionally, your response is to be presented in a paragraph format - Do not make use of markdown formatting.
What sort of differences exist in the various exceptions to the knock-and-announce rule?
Law Enforcement Identification When Executing a Warrant Overview As noted above, amid recent calls for legislative changes to police practices, another area that has received attention concerns the authority for law enforcement officers to execute a warrant by entering a home without first seeking consensual entry by announcing themselves and their purpose. As a default, law enforcement officers must comply with the knock and announce rule— an “ancient” common-law doctrine, which generally requires officers to knock and announce their presence before entering a home to execute a search warrant. The Supreme Court has interpreted the Fourth Amendment’s reasonableness requirement as generally mandating compliance with the knock and announce rule. The knock and announce rule is also codified in a federal statute, but the Supreme Court has interpreted that statute as “prohibiting nothing” and “merely [authorizing] officers to damage property [upon entry] in certain instances.” When officers violate the knock and announce rule, they may be subject to civil lawsuits and “internal police discipline.” However, in Hudson v. Michigan the Supreme Court curtailed the remedies available for knock and announce violations by concluding that evidence obtained following such a violation is not subject to the exclusionary rule, which “prevents the government from using most evidence gathered in violation of the United States Constitution.” There are two closely related exceptions to the knock and announce rule, the first of which is for exigent circumstances. Exigent circumstances are those where the “police have a ‘reasonable suspicion’ that knocking and announcing would be dangerous, futile, or destructive to the purposes of the investigation.” Typical examples include instances where police believe that the suspect is armed or likely to destroy evidence. Exigent circumstances must be based on the “particular circumstances” of each case, and may not amount to a “blanket exception to the [knock and announce] requirement” for “entire categor[ies] of criminal activity.” For example, the Supreme Court rejected an assertion that “police officers are never required to knock and announce their presence when executing a search warrant in a felony drug investigation.” Instead, “in each case, it is the duty of a court confronted with the question to determine whether the facts and circumstances of the particular entry justified dispensing with the knock-and-announce requirement.” The second exception is for no-knock warrants, which provide explicit authority for judges to grant so-called “no-knock” entry in the warrant itself, upon a finding of certain factual predicates. The justifications for no-knock warrants are similar to, and sometimes described interchangeably with, the concept of exigent circumstances. No-knock warrants, and exigent circumstances, both typically involve instances where there is a risk that knocking and announcing would endanger officers or result in the destruction of evidence. A key distinction between no-knock warrants and no-knock entry pursuant to the exigent circumstances exception is temporal. With no-knock warrants, officers “have anticipated exigent circumstances before searching, and have asked for pre-search judicial approval to enter without knocking.” In contrast, when officers lack a no-knock warrant and enter without knocking due to exigent circumstances the justification for bypassing knock and announce requirements may arise as late as when the officers are at the door. 
A number of states have statutes that authorize magistrate judges to grant no-knock warrants in certain circumstances. Although a federal statute previously authorized no-knock warrants for certain drug searches, Congress repealed it. As a result, the legal status of federal no-knock search warrants is unsettled, although federal officers do sometimes employ no-knock warrants or act pursuant to no-knock warrants issued by state courts when serving on joint state-federal task forces. From a Fourth Amendment standpoint, the Supreme Court has indicated some approval of “[t]he practice of allowing magistrates to issue no-knock warrants . . . when sufficient cause to do so can be demonstrated ahead of time,” assuming that the practice does not amount to a blanket exception to knock and announce. However, one unresolved question is whether federal courts have authority to issue no-knock warrants in the absence of a statute expressly providing that power, as federal courts “possess only that power authorized by Constitution and statute . . . .” The DOJ has concluded that federal courts are authorized to do so, in large part because the federal rule governing search warrants has been broadly interpreted by courts in other contexts to include specific searches that it does not expressly authorize. In one sense, the legal vitality of federal no-knock warrants may be of limited practical significance; as noted, federal law enforcement officers may still be permitted to enter a home without knocking and announcing if exigent circumstances are present. However, some courts have concluded that no-knock warrants shield officers from responsibility for independently assessing the existence of exigent circumstances at the time of entry. To the extent that is true, no-knock warrants could permit no-knock entry where the exigent circumstances exception would not—for example, in an instance where the factors that justified the no-knock warrant are no longer present at the time of entry. Relatedly, if a valid no-knock warrant provides such a shield against the responsibility of reassessing exigent circumstances at the time of entry, it could limit the availability of civil lawsuits as a remedy where officers disregard knock and announce requirements pursuant to a no-knock warrant, but exigent circumstances no longer exist at the time of entry.
In constructing your response, you are to exclusively rely on the information presented in the provided context source, avoiding all information from other external sources. Additionally, your response is to be presented in a paragraph format - Do not make use of markdown formatting. Law Enforcement Identification When Executing a Warrant Overview As noted above, amid recent calls for legislative changes to police practices, another area that has received attention concerns the authority for law enforcement officers to execute a warrant by entering a home without first seeking consensual entry by announcing themselves and their purpose. As a default, law enforcement officers must comply with the knock and announce rule— an “ancient” common-law doctrine, which generally requires officers to knock and announce their presence before entering a home to execute a search warrant. The Supreme Court has interpreted the Fourth Amendment’s reasonableness requirement as generally mandating compliance with the knock and announce rule. The knock and announce rule is also codified in a federal statute, but the Supreme Court has interpreted that statute as “prohibiting nothing” and “merely [authorizing] officers to damage property [upon entry] in certain instances.” When officers violate the knock and announce rule, they may be subject to civil lawsuits and “internal police discipline.” However, in Hudson v. Michigan the Supreme Court curtailed the remedies available for knock and announce violations by concluding that evidence obtained following such a violation is not subject to the exclusionary rule, which “prevents the government from using most evidence gathered in violation of the United States Constitution.” There are two closely related exceptions to the knock and announce rule, the first of which is for exigent circumstances. Exigent circumstances are those where the “police have a ‘reasonable suspicion’ that knocking and announcing would be dangerous, futile, or destructive to the purposes of the investigation.” Typical examples include instances where police believe that the suspect is armed or likely to destroy evidence. Exigent circumstances must be based on the “particular circumstances” of each case, and may not amount to a “blanket exception to the [knock and announce] requirement” for “entire categor[ies] of criminal activity.” For example, the Supreme Court rejected an assertion that “police officers are never required to knock and announce their presence when executing a search warrant in a felony drug investigation.” Instead, “in each case, it is the duty of a court confronted with the question to determine whether the facts and circumstances of the particular entry justified dispensing with the knock-and-announce requirement.” The second exception is for no-knock warrants, which provide explicit authority for judges to grant so-called “no-knock” entry in the warrant itself, upon a finding of certain factual predicates. The justifications for no-knock warrants are similar to, and sometimes described interchangeably with, the concept of exigent circumstances. No-knock warrants, and exigent circumstances, both typically involve instances where there is a risk that knocking and announcing would endanger officers or result in the destruction of evidence. A key distinction between no-knock warrants and no-knock entry pursuant to the exigent circumstances exception is temporal. 
With no-knock warrants, officers “have anticipated exigent circumstances before searching, and have asked for pre-search judicial approval to enter without knocking.” In contrast, when officers lack a no-knock warrant and enter without knocking due to exigent circumstances the justification for bypassing knock and announce requirements may arise as late as when the officers are at the door. A number of states have statutes that authorize magistrate judges to grant no-knock warrants in certain circumstances. Although a federal statute previously authorized no-knock warrants for certain drug searches, Congress repealed it. As a result, the legal status of federal no-knock search warrants is unsettled, although federal officers do sometimes employ no-knock warrants or act pursuant to no-knock warrants issued by state courts when serving on joint state-federal task forces. From a Fourth Amendment standpoint, the Supreme Court has indicated some approval of “[t]he practice of allowing magistrates to issue no-knock warrants . . . when sufficient cause to do so can be demonstrated ahead of time,” assuming that the practice does not amount to a blanket exception to knock and announce. However, one unresolved question is whether federal courts have authority to issue no-knock warrants in the absence of a statute expressly providing that power, as federal courts “possess only that power authorized by Constitution and statute . . . .” The DOJ has concluded that federal courts are authorized to do so, in large part because the federal rule governing search warrants has been broadly interpreted by courts in other contexts to include specific searches that it does not expressly authorize. In one sense, the legal vitality of federal no-knock warrants may be of limited practical significance; as noted, federal law enforcement officers may still be permitted to enter a home without knocking and announcing if exigent circumstances are present. However, some courts have concluded that no-knock warrants shield officers from responsibility for independently assessing the existence of exigent circumstances at the time of entry. To the extent that is true, no-knock warrants could permit no-knock entry where the exigent circumstances exception would not—for example, in an instance where the factors that justified the no-knock warrant are no longer present at the time of entry. Relatedly, if a valid no-knock warrant provides such a shield against the responsibility of reassessing exigent circumstances at the time of entry, it could limit the availability of civil lawsuits as a remedy where officers disregard knock and announce requirements pursuant to a no-knock warrant, but exigent circumstances no longer exist at the time of entry. Legislation in the 116th Congress At least two bills introduced in the 116th Congress would change the legal landscape regarding unannounced home entry by law enforcement during execution of search warrants. (A third bill, the JUSTICE Act, while not directly altering existing practices, would require reporting on the use of no-knock warrants.) In the House, one section of the Justice in Policing Act of 2020 (H.R. 
7120) would establish that search warrants issued in federal drug cases must “require that a law enforcement officer execute the search warrant only after providing notice of his or her authority and purpose.” The bill would also require states and localities that receive certain federal funds to “have in effect a law that prohibits the issuance of a no-knock warrant in a drug case.” At least with respect to the requirement for states and localities in H.R. 7120, it appears that unannounced entry would still be permitted in exigent circumstances. The bill only requires states and localities to prohibit the issuance of no-knock warrants in drug cases to receive the specified federal funding, and as noted above, it is well-established that law enforcement officers may dispense with the knock-and-announce requirement when they have reasonable suspicion of exigent circumstances regardless of whether the warrant authorizes no-knock entry. The more difficult question may be what effect the requirement for federal drug warrants in H.R. 7120 would have. Under the bill’s terms, all warrants authorized in federal drug cases would have to expressly require that they be executed “only after” a law enforcement officer has provided notice of his or her authority and purpose. As such, were the bill to become law, it could possibly create tension between the “exigent circumstances” exception to the knock and announce rule and the required terms of warrants under the new statute. For example, officers might encounter a situation where knocking and announcing would be “dangerous” or “destructive of the purposes of the investigation” and thus excused under Supreme Court doctrine, yet the terms of the warrant would still expressly require knocking and announcing without exception. In this scenario, the bill’s blanket requirement might produce uncertainty as to the officers’ authority. That said, though warrants would require notice under the proposal, and officers who did not comply with that requirement would violate the terms of the warrant, it is not clear that no-knock entry in such a circumstance would lead to consequences like evidence exclusion. In other contexts where warrants have been executed in ways that exceed the warrants’ terms, some courts have declined to suppress evidence in the absence of “extreme” violations or “flagrant disregard for the terms” at issue. A court might also interpret H.R. 7120 as implicitly incorporating the exigent circumstances exception. The Supreme Court has taken this view of the federal statute that codifies the common-law knock-and-announce rule and has observed more generally that when a magistrate declines to authorize no-knock entry in advance, that decision “should not be interpreted to remove the officers’ authority to exercise independent judgment concerning the wisdom of a no-knock entry at the time the warrant is being executed.” What sort of differences exist in the various exceptions to the knock-and-announce rule?
In constructing your response, you are to exclusively rely on the information presented in the provided context source, avoiding all information from other external sources. Additionally, your response is to be presented in a paragraph format - Do not make use of markdown formatting. EVIDENCE: Law Enforcement Identification When Executing a Warrant Overview As noted above, amid recent calls for legislative changes to police practices, another area that has received attention concerns the authority for law enforcement officers to execute a warrant by entering a home without first seeking consensual entry by announcing themselves and their purpose. As a default, law enforcement officers must comply with the knock and announce rule— an “ancient” common-law doctrine, which generally requires officers to knock and announce their presence before entering a home to execute a search warrant. The Supreme Court has interpreted the Fourth Amendment’s reasonableness requirement as generally mandating compliance with the knock and announce rule. The knock and announce rule is also codified in a federal statute, but the Supreme Court has interpreted that statute as “prohibiting nothing” and “merely [authorizing] officers to damage property [upon entry] in certain instances.” When officers violate the knock and announce rule, they may be subject to civil lawsuits and “internal police discipline.” However, in Hudson v. Michigan the Supreme Court curtailed the remedies available for knock and announce violations by concluding that evidence obtained following such a violation is not subject to the exclusionary rule, which “prevents the government from using most evidence gathered in violation of the United States Constitution.” There are two closely related exceptions to the knock and announce rule, the first of which is for exigent circumstances. Exigent circumstances are those where the “police have a ‘reasonable suspicion’ that knocking and announcing would be dangerous, futile, or destructive to the purposes of the investigation.” Typical examples include instances where police believe that the suspect is armed or likely to destroy evidence. Exigent circumstances must be based on the “particular circumstances” of each case, and may not amount to a “blanket exception to the [knock and announce] requirement” for “entire categor[ies] of criminal activity.” For example, the Supreme Court rejected an assertion that “police officers are never required to knock and announce their presence when executing a search warrant in a felony drug investigation.” Instead, “in each case, it is the duty of a court confronted with the question to determine whether the facts and circumstances of the particular entry justified dispensing with the knock-and-announce requirement.” The second exception is for no-knock warrants, which provide explicit authority for judges to grant so-called “no-knock” entry in the warrant itself, upon a finding of certain factual predicates. The justifications for no-knock warrants are similar to, and sometimes described interchangeably with, the concept of exigent circumstances. No-knock warrants, and exigent circumstances, both typically involve instances where there is a risk that knocking and announcing would endanger officers or result in the destruction of evidence. A key distinction between no-knock warrants and no-knock entry pursuant to the exigent circumstances exception is temporal. 
With no-knock warrants, officers “have anticipated exigent circumstances before searching, and have asked for pre-search judicial approval to enter without knocking.” In contrast, when officers lack a no-knock warrant and enter without knocking due to exigent circumstances the justification for bypassing knock and announce requirements may arise as late as when the officers are at the door. A number of states have statutes that authorize magistrate judges to grant no-knock warrants in certain circumstances. Although a federal statute previously authorized no-knock warrants for certain drug searches, Congress repealed it. As a result, the legal status of federal no-knock search warrants is unsettled, although federal officers do sometimes employ no-knock warrants or act pursuant to no-knock warrants issued by state courts when serving on joint state-federal task forces. From a Fourth Amendment standpoint, the Supreme Court has indicated some approval of “[t]he practice of allowing magistrates to issue no-knock warrants . . . when sufficient cause to do so can be demonstrated ahead of time,” assuming that the practice does not amount to a blanket exception to knock and announce. However, one unresolved question is whether federal courts have authority to issue no-knock warrants in the absence of a statute expressly providing that power, as federal courts “possess only that power authorized by Constitution and statute . . . .” The DOJ has concluded that federal courts are authorized to do so, in large part because the federal rule governing search warrants has been broadly interpreted by courts in other contexts to include specific searches that it does not expressly authorize. In one sense, the legal vitality of federal no-knock warrants may be of limited practical significance; as noted, federal law enforcement officers may still be permitted to enter a home without knocking and announcing if exigent circumstances are present. However, some courts have concluded that no-knock warrants shield officers from responsibility for independently assessing the existence of exigent circumstances at the time of entry. To the extent that is true, no-knock warrants could permit no-knock entry where the exigent circumstances exception would not—for example, in an instance where the factors that justified the no-knock warrant are no longer present at the time of entry. Relatedly, if a valid no-knock warrant provides such a shield against the responsibility of reassessing exigent circumstances at the time of entry, it could limit the availability of civil lawsuits as a remedy where officers disregard knock and announce requirements pursuant to a no-knock warrant, but exigent circumstances no longer exist at the time of entry. USER: What sort of differences exist in the various exceptions to the knock-and-announce rule? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
44
13
887
null
44
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
I'm doing a retail forecasting analysis of consumer behavior and the economy. Please summarize the changes in the fashion industry first and then tell me in two paragraphs how companies cope with them from both consumer and retailer perspectives.
Uncertainty in the face of headwinds With conflicts in Europe and the Middle East and strained international relations elsewhere, geopolitics is the number-one concern for fashion industry executives going into 2024, followed by economic volatility and inflation. Some 62 percent of executives in this year’s survey, conducted in September, cite geopolitical instability as the top risk to growth. Economic volatility is cited by 55 percent and inflation is mentioned by 51 percent (compared with 78 percent last year). The global average headline rate of inflation is predicted to moderate to 5.8 percent—still high on a historical basis—from 6.9 percent in 2023.1 Against a challenging economic backdrop, executive views of the industry’s prospects are more divided than in any year since the launch of the BoF–McKinsey Executive Survey in 2017. While 26 percent of survey respondents say they expect conditions to improve year on year, 37 percent see them remaining the same and 38 percent think they will worsen. Uncertainty within the industry reflects the broader economic situation, albeit with regional divergence. Going into 2024, pressure on household incomes is expected to dampen demand for apparel and prompt trading down across categories. Still, there are geographic outliers that may offer comfort. One is India, where consumer confidence hit a four-year high in September 2023.2 India-based executives are more optimistic than those in Western countries, with 85 percent of respondents to McKinsey’s Global Economics Intelligence survey saying that conditions have improved in the past six months.3 China’s economy is facing challenges, but the country’s consumers show a higher intent to shop for fashion in 2024 than consumers in both the United States and Europe. Ten themes for 2024 To prepare for challenges and be alert to opportunities, leading fashion companies will likely prioritize contingency planning for the coming year. A key theme will be companies keeping a firm grip on costs and inventories while driving growth by precisely managing prices. Brands and suppliers can expect an increasingly competitive environment. But they will also have opportunities, with consumers discovering new styles, tastes, and priorities—all presenting routes to value creation. As previously done, this year’s report highlights ten emerging themes that will be high on leadership agendas. Global economy: Fragmented future. In 2024, the global economic outlook will continue to be unsettled, as financial, geopolitical, and other challenges weigh on consumer confidence. Fashion markets in China, Europe, and the United States will likely face headwinds, some of which reflect individual regional dynamics. Suppliers, brands, and retailers may need to bolster contingency planning and manage for uncertainty. Climate urgency. The frequency and intensity of extreme weather-related events in 2023 mean the climate crisis is an even more urgent priority than in previous years. With physical and transition risks rising across continents, the industry must not delay in tackling emissions and building resilience into supply chains. Consumer shifts: Vacation mode. Consumers are gearing up for the biggest year of travel since before the pandemic. But a shift in values means expectations are evolving, even as shopping remains a priority. Brands and retailers should refresh distribution and category strategies to reflect the new reality. The new face of influence. 
It’s time for brand marketers to update their influencer playbooks, as a new guard of creative personalities wins fans. Working with opinion leaders in 2024 will require a different type of partnership, an emphasis on video, and a willingness to loosen the reins on creative control. Outdoors reinvented. Technical outdoor clothing and “gorpcore” are in demand as consumers embrace healthier lifestyles. In 2024, more outdoor brands are expected to launch lifestyle collections. At the same time, lifestyle brands will likely embed technical elements into collections, blurring the lines between functionality and style. Fashion system: Generative AI’s creative crossroads. After generative AI’s (gen AI) breakout year in 2023, more use cases are emerging across the industry. Capturing value will require fashion players to look beyond automation and explore gen AI’s potential to enhance the work of human creatives. Fast fashion’s power play. Fast-fashion competition is set to be fiercer than ever. Challengers, led by Shein and Temu, are bringing new tactics on price, customer experience, and speed. Success for disruptors and incumbents could hinge on adapting to new consumer preferences while navigating the regulatory agenda. All eyes on brand. Brand marketing is expected to be back in the spotlight as the fashion industry manages a switch away from performance marketing. Brands may benefit from forging emotional connections with consumers as marketers rewrite playbooks to emphasise long-term brand building. Sustainability rules. The era of fashion industry self-regulation is drawing to a close. Across jurisdictions, new rules will have significant effects on both consumers and fashion players. Brands and manufacturers may consider revamping business models to align with the changes ahead. Bullwhip snaps back. Shifts in consumer demand have created a “bullwhip effect,” by which order volatility reverberates unpredictably through supply chains. Suppliers will likely face pressure as brands and retailers focus on transparency and strategic partnerships. Looking ahead As the industry continues to be challenged by geopolitical and economic headwinds, fashion leaders in 2024 will look to strike a careful balance between managing uncertainty and seizing opportunities. With cost-saving tactics mostly exhausted, companies may focus on growing sales, underpinned by new pricing and promotion strategies. Across the industry, net intent to raise prices is more than 50 percent, according to the BoF–McKinsey Executive Survey. At the same time, reduced cost pressures could provide a potential boost to performance. As climate change brings increasingly extreme weather events and global temperatures rise, the coming year is likely to mark a heightened industry focus on environmental, social, and governance issues. Our survey shows that the topic is seen as both the number-one priority and number-one challenge for industry executives. The most successful companies will find a balance between sustainability initiatives, risk management, and commercial imperatives. In an uncertain world, consumer discretionary spend will be weighted toward trusted categories and brands. Hard luxury goods—jewelry, watches, and leather—will likely be in demand, reflecting their potential investment value in tough economic times. Consumers are expected to travel more and continue spending more time outdoors. And they prefer emotional connections and authenticity over celebrity endorsements. 
All told, executives are bracing for a strategically complex year ahead. To counter uncertainty, leading companies will prepare for a range of outcomes. The most successful will become more resilient, better equipped to manage the challenges, and ready to accelerate when the storm clouds begin to clear.
"================ <TEXT PASSAGE> ======= Uncertainty in the face of headwinds With conflicts in Europe and the Middle East and strained international relations elsewhere, geopolitics is the number-one concern for fashion industry executives going into 2024, followed by economic volatility and inflation. Some 62 percent of executives in this year’s survey, conducted in September, cite geopolitical instability as the top risk to growth. Economic volatility is cited by 55 percent and inflation is mentioned by 51 percent (compared with 78 percent last year). The global average headline rate of inflation is predicted to moderate to 5.8 percent—still high on a historical basis—from 6.9 percent in 2023.1 Against a challenging economic backdrop, executive views of the industry’s prospects are more divided than in any year since the launch of the BoF–McKinsey Executive Survey in 2017. While 26 percent of survey respondents say they expect conditions to improve year on year, 37 percent see them remaining the same and 38 percent think they will worsen. Uncertainty within the industry reflects the broader economic situation, albeit with regional divergence. Going into 2024, pressure on household incomes is expected to dampen demand for apparel and prompt trading down across categories. Still, there are geographic outliers that may offer comfort. One is India, where consumer confidence hit a four-year high in September 2023.2 India-based executives are more optimistic than those in Western countries, with 85 percent of respondents to McKinsey’s Global Economics Intelligence survey saying that conditions have improved in the past six months.3 China’s economy is facing challenges, but the country’s consumers show a higher intent to shop for fashion in 2024 than consumers in both the United States and Europe. Ten themes for 2024 To prepare for challenges and be alert to opportunities, leading fashion companies will likely prioritize contingency planning for the coming year. A key theme will be companies keeping a firm grip on costs and inventories while driving growth by precisely managing prices. Brands and suppliers can expect an increasingly competitive environment. But they will also have opportunities, with consumers discovering new styles, tastes, and priorities—all presenting routes to value creation. As previously done, this year’s report highlights ten emerging themes that will be high on leadership agendas. Global economy: Fragmented future. In 2024, the global economic outlook will continue to be unsettled, as financial, geopolitical, and other challenges weigh on consumer confidence. Fashion markets in China, Europe, and the United States will likely face headwinds, some of which reflect individual regional dynamics. Suppliers, brands, and retailers may need to bolster contingency planning and manage for uncertainty. Climate urgency. The frequency and intensity of extreme weather-related events in 2023 mean the climate crisis is an even more urgent priority than in previous years. With physical and transition risks rising across continents, the industry must not delay in tackling emissions and building resilience into supply chains. Consumer shifts: Vacation mode. Consumers are gearing up for the biggest year of travel since before the pandemic. But a shift in values means expectations are evolving, even as shopping remains a priority. Brands and retailers should refresh distribution and category strategies to reflect the new reality. The new face of influence. 
It’s time for brand marketers to update their influencer playbooks, as a new guard of creative personalities wins fans. Working with opinion leaders in 2024 will require a different type of partnership, an emphasis on video, and a willingness to loosen the reins on creative control. Outdoors reinvented. Technical outdoor clothing and “gorpcore” are in demand as consumers embrace healthier lifestyles. In 2024, more outdoor brands are expected to launch lifestyle collections. At the same time, lifestyle brands will likely embed technical elements into collections, blurring the lines between functionality and style. Fashion system: Generative AI’s creative crossroads. After generative AI’s (gen AI) breakout year in 2023, more use cases are emerging across the industry. Capturing value will require fashion players to look beyond automation and explore gen AI’s potential to enhance the work of human creatives. Fast fashion’s power play. Fast-fashion competition is set to be fiercer than ever. Challengers, led by Shein and Temu, are bringing new tactics on price, customer experience, and speed. Success for disruptors and incumbents could hinge on adapting to new consumer preferences while navigating the regulatory agenda. All eyes on brand. Brand marketing is expected to be back in the spotlight as the fashion industry manages a switch away from performance marketing. Brands may benefit from forging emotional connections with consumers as marketers rewrite playbooks to emphasise long-term brand building. Sustainability rules. The era of fashion industry self-regulation is drawing to a close. Across jurisdictions, new rules will have significant effects on both consumers and fashion players. Brands and manufacturers may consider revamping business models to align with the changes ahead. Bullwhip snaps back. Shifts in consumer demand have created a “bullwhip effect,” by which order volatility reverberates unpredictably through supply chains. Suppliers will likely face pressure as brands and retailers focus on transparency and strategic partnerships. Looking ahead As the industry continues to be challenged by geopolitical and economic headwinds, fashion leaders in 2024 will look to strike a careful balance between managing uncertainty and seizing opportunities. With cost-saving tactics mostly exhausted, companies may focus on growing sales, underpinned by new pricing and promotion strategies. Across the industry, net intent to raise prices is more than 50 percent, according to the BoF–McKinsey Executive Survey. At the same time, reduced cost pressures could provide a potential boost to performance. As climate change brings increasingly extreme weather events and global temperatures rise, the coming year is likely to mark a heightened industry focus on environmental, social, and governance issues. Our survey shows that the topic is seen as both the number-one priority and number-one challenge for industry executives. The most successful companies will find a balance between sustainability initiatives, risk management, and commercial imperatives. In an uncertain world, consumer discretionary spend will be weighted toward trusted categories and brands. Hard luxury goods—jewelry, watches, and leather—will likely be in demand, reflecting their potential investment value in tough economic times. Consumers are expected to travel more and continue spending more time outdoors. And they prefer emotional connections and authenticity over celebrity endorsements. 
All told, executives are bracing for a strategically complex year ahead. To counter uncertainty, leading companies will prepare for a range of outcomes. The most successful will become more resilient, better equipped to manage the challenges, and ready to accelerate when the storm clouds begin to clear. https://www.mckinsey.com/industries/retail/our-insights/state-of-fashion ================ <QUESTION> ======= I'm doing a retail forecasting analysis of consumer behavior and the economy. Please summarize the changes in the fashion industry first and then tell me in two paragraphs how companies cope with them from both consumer and retailer perspectives. ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." EVIDENCE: Uncertainty in the face of headwinds With conflicts in Europe and the Middle East and strained international relations elsewhere, geopolitics is the number-one concern for fashion industry executives going into 2024, followed by economic volatility and inflation. Some 62 percent of executives in this year’s survey, conducted in September, cite geopolitical instability as the top risk to growth. Economic volatility is cited by 55 percent and inflation is mentioned by 51 percent (compared with 78 percent last year). The global average headline rate of inflation is predicted to moderate to 5.8 percent—still high on a historical basis—from 6.9 percent in 2023.1 Against a challenging economic backdrop, executive views of the industry’s prospects are more divided than in any year since the launch of the BoF–McKinsey Executive Survey in 2017. While 26 percent of survey respondents say they expect conditions to improve year on year, 37 percent see them remaining the same and 38 percent think they will worsen. Uncertainty within the industry reflects the broader economic situation, albeit with regional divergence. Going into 2024, pressure on household incomes is expected to dampen demand for apparel and prompt trading down across categories. Still, there are geographic outliers that may offer comfort. One is India, where consumer confidence hit a four-year high in September 2023.2 India-based executives are more optimistic than those in Western countries, with 85 percent of respondents to McKinsey’s Global Economics Intelligence survey saying that conditions have improved in the past six months.3 China’s economy is facing challenges, but the country’s consumers show a higher intent to shop for fashion in 2024 than consumers in both the United States and Europe. Ten themes for 2024 To prepare for challenges and be alert to opportunities, leading fashion companies will likely prioritize contingency planning for the coming year. A key theme will be companies keeping a firm grip on costs and inventories while driving growth by precisely managing prices. Brands and suppliers can expect an increasingly competitive environment. But they will also have opportunities, with consumers discovering new styles, tastes, and priorities—all presenting routes to value creation. As previously done, this year’s report highlights ten emerging themes that will be high on leadership agendas. Global economy: Fragmented future. In 2024, the global economic outlook will continue to be unsettled, as financial, geopolitical, and other challenges weigh on consumer confidence. Fashion markets in China, Europe, and the United States will likely face headwinds, some of which reflect individual regional dynamics. Suppliers, brands, and retailers may need to bolster contingency planning and manage for uncertainty. Climate urgency. The frequency and intensity of extreme weather-related events in 2023 mean the climate crisis is an even more urgent priority than in previous years. With physical and transition risks rising across continents, the industry must not delay in tackling emissions and building resilience into supply chains. Consumer shifts: Vacation mode. 
Consumers are gearing up for the biggest year of travel since before the pandemic. But a shift in values means expectations are evolving, even as shopping remains a priority. Brands and retailers should refresh distribution and category strategies to reflect the new reality. The new face of influence. It’s time for brand marketers to update their influencer playbooks, as a new guard of creative personalities wins fans. Working with opinion leaders in 2024 will require a different type of partnership, an emphasis on video, and a willingness to loosen the reins on creative control. Outdoors reinvented. Technical outdoor clothing and “gorpcore” are in demand as consumers embrace healthier lifestyles. In 2024, more outdoor brands are expected to launch lifestyle collections. At the same time, lifestyle brands will likely embed technical elements into collections, blurring the lines between functionality and style. Fashion system: Generative AI’s creative crossroads. After generative AI’s (gen AI) breakout year in 2023, more use cases are emerging across the industry. Capturing value will require fashion players to look beyond automation and explore gen AI’s potential to enhance the work of human creatives. Fast fashion’s power play. Fast-fashion competition is set to be fiercer than ever. Challengers, led by Shein and Temu, are bringing new tactics on price, customer experience, and speed. Success for disruptors and incumbents could hinge on adapting to new consumer preferences while navigating the regulatory agenda. All eyes on brand. Brand marketing is expected to be back in the spotlight as the fashion industry manages a switch away from performance marketing. Brands may benefit from forging emotional connections with consumers as marketers rewrite playbooks to emphasise long-term brand building. Sustainability rules. The era of fashion industry self-regulation is drawing to a close. Across jurisdictions, new rules will have significant effects on both consumers and fashion players. Brands and manufacturers may consider revamping business models to align with the changes ahead. Bullwhip snaps back. Shifts in consumer demand have created a “bullwhip effect,” by which order volatility reverberates unpredictably through supply chains. Suppliers will likely face pressure as brands and retailers focus on transparency and strategic partnerships. Looking ahead As the industry continues to be challenged by geopolitical and economic headwinds, fashion leaders in 2024 will look to strike a careful balance between managing uncertainty and seizing opportunities. With cost-saving tactics mostly exhausted, companies may focus on growing sales, underpinned by new pricing and promotion strategies. Across the industry, net intent to raise prices is more than 50 percent, according to the BoF–McKinsey Executive Survey. At the same time, reduced cost pressures could provide a potential boost to performance. As climate change brings increasingly extreme weather events and global temperatures rise, the coming year is likely to mark a heightened industry focus on environmental, social, and governance issues. Our survey shows that the topic is seen as both the number-one priority and number-one challenge for industry executives. The most successful companies will find a balance between sustainability initiatives, risk management, and commercial imperatives. In an uncertain world, consumer discretionary spend will be weighted toward trusted categories and brands. 
Hard luxury goods—jewelry, watches, and leather—will likely be in demand, reflecting their potential investment value in tough economic times. Consumers are expected to travel more and continue spending more time outdoors. And they prefer emotional connections and authenticity over celebrity endorsements. All told, executives are bracing for a strategically complex year ahead. To counter uncertainty, leading companies will prepare for a range of outcomes. The most successful will become more resilient, better equipped to manage the challenges, and ready to accelerate when the storm clouds begin to clear. USER: I'm doing a retail forecasting analysis of consumer behavior and the economy. Please summarize the changes in the fashion industry first and then tell me in two paragraphs how companies cope with them from both consumer and retailer perspectives. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
49
39
1,075
null
825
Answer the question using only information from the provided context block.
What are some of the benefits of online education?
INTRODUCTION Historically, postsecondary education in the United States was founded on the principles of the European system, requiring the physical presence of professors and students in the same location (Knowles, 1994). From 1626, with the founding of Harvard University (The Harvard Guide, 2004), to the development of junior colleges and vocational schools in the early 1900s (Cohen & Brawer, 1996; Jacobs & Grubb, 2003), the higher education system developed to prepare post-high school students for one of three separate tiers. The college and university system in the United States developed its own set of structures designed to prepare students for baccalaureate and graduate degrees. Junior colleges were limited to associate degrees, while vocational education institutions offered occupational certificates. In many cases, there was inadequate recognition of the postsecondary education offered at junior colleges and vocational education institutions, resulting in the inability of students to transfer to 4-year institutions (National Center for Education Statistics, 2006). In the mid-20th century, some junior colleges began to provide academic, vocational, and personal development educational offerings for members of the local communities. During this same period, junior or community colleges developed a role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs involved Associate of Arts (AA) and Associate of Science (AS) degrees. Associate of Applied Science (AAS) degrees were developed during the 1990s. The AAS degree was granted to those 2 who successfully completed the majority of their college program in vocational education. The creation of a variety of applied baccalaureate degrees allowed students who had previously thought of the AAS degree as a terminal program to complete a baccalaureate degree (Kansas Board of Regents, 2002-2003). Online education also became a strategy for students to access higher education in the 1990s (Allen & Seaman, 2007b). The proliferation of online courses alleviated some of the location-bound barriers to higher education, but online education was criticized as less rigorous than traditional classroom-based course work by traditional academicians. Russell attempted to address this argument with his 1999 meta-analysis of studies dating from the 1920s and covering multiple delivery models, including online education. Russell concluded there was no statistically significant difference in student achievement between courses offered online and those offered in the traditional classroom setting. Since the development of correspondence courses in the 1920s, researchers have attempted to ascertain if students participating in distance education are being shortchanged in their educational goals. No significant difference in grades has been found in the majority of studies designed to address this issue. Studies analyzing online student retention have shown significantly lower retention for online students. In the last 10 years, research studies have expanded to include variations of online education. These include strictly online, hybrid courses, Web-assisted classroom settings, and the traditional higher education course offered only as face-to-face instruction (Carmel & Gold, 2007). 
Online education continues to proliferate at the same time the number of secondary students in the United States overall is projected to increase (National Center for Education Statistics [NCES], 2006). The projected increase of potential postsecondary students and online postsecondary options provides opportunities for increases in online education programs and courses. In 2000, NCES reported that over 65% of students in higher education were participating in online courses. In a 2007 study, Allen and Seaman estimated only 16% of those enrolled in online education courses are undergraduate students seeking their first degree, counter to the projected increase in traditional-age students. The majority of enrollees in online education are adults updating or advancing their credentials, creating an additional educational market for colleges and universities seeking to expand enrollment without adding physical space (Allen & Seaman, 2007a). For states and localities faced with a contradictory traditional-age enrollment decrease, these figures present an untapped market for higher education courses and programs. Background Researchers attempted to analyze the efficacy of distance education as far back as the 1920s, when correspondence courses were created to meet the need of students not willing to attend a traditional classroom-based higher education setting. A meta-analysis of these studies resulted in “The No Significant Difference Phenomenon,” reported by Russell (2001). The results of over 355 studies were compiled, comparing various modes of delivery including correspondence, audio, television courses, and the newest wave of computer-facilitated instruction. Following analyses of studies completed prior to 2001, Russell concluded there was no difference in learning between students enrolled in distance education and those completing courses in the traditional setting. Studies completed since then have provided mixed results. Summers, Waigand, and Whittaker (2005) found there was no difference in GPA and retention between the online and traditional classroom. Arle (2002) found higher achievement by online students, and Brown and Liedholm (2002) found GPA and student retention better in a traditional classroom setting. Student retention is an integral part of the student achievement conversation and is an issue for all forms of higher education. Degree-seeking students’ overall retention has been reported as less than 56% by NCES (2001). Long considered a problem in higher education, attention to the distance education model has shown even lower retention rates in online students than in students attending the traditional college setting (Phipps & Merisotis, 1999). Research on different modalities, such as fully online and hybrid online courses, has produced mixed results (Carmel & Gold, 2007). No significant trend toward increased retention of students in any of the online modalities has been documented. Retention studies of transfer students have primarily included traditionally defined students transferring from a community college. Statistics have consistently shown a lower retention rate for students transferring from a community college to a 4-year university than for students who began their post-high school education at a 4-year institution (NCES, 2006). 
Townsend’s studies of transfer students at the University of Missouri-Columbia also showed a lower baccalaureate retention rate for students who had completed an AAS degree than for students beginning their education at a 4-year institution (Townsend, 2002). Occupationally oriented bachelor’s degree completion programs are relatively new to higher education. Transfer programs in the liberal arts from community colleges to 4-year institutions were common by the 1990s. Townsend (2001), in her study 5 conducted at the University of Missouri–Columbia, observed the blurring of the lines between non-transferrable occupationally oriented undergraduate degrees and undergraduate degrees and certificates that were easily transferred. The study conducted by Townsend was among the first to recognize that many students who began their education at community and technical colleges had bachelor’s degree aspirations that grew after their completion of an occupationally-oriented degree. Laanan proposed that the increase in institutions offering AAS degrees necessitated new ways to transfer undergraduate credits (2003). The setting of this study is a medium-sized Midwestern campus located in Topeka, Kansas. Washburn University enrolls approximately 6000 students a year in undergraduate and graduate programs, including liberal arts, professional schools, and a law school (Washburn University, 2008). The Technology Administration (TA) program selected for the present study began in the 1990s as a baccalaureate degree completion program for students who had received an occupationally oriented associate degree at a Kansas community college or through Washburn’s articulation agreement with Kansas vocational-technical schools. This program provided students who previously had obtained an Associate of Applied Science degree in an occupational area an opportunity to earn a bachelor’s degree. Peterson, Dean of Continuing Education, Washburn University, stated that in early 1999, Washburn University began online courses and programs at the behest of a neighboring community college (personal communication, April 18, 2008). Washburn was asked to develop an online bachelor’s degree completion program for students graduating from community colleges and technical colleges with an Associate of Applied 6 Science degree. The TA program was among the first programs to offer the online bachelor’s degree completion option. The TA program offered its first online courses in Spring 2000. Online education at Washburn expanded to other programs and courses, to include over 200 courses (Washburn University, 2008). The original online partnership with two community colleges expanded to include 16 additional community colleges and four technical colleges in Kansas, as well as colleges in Missouri, California, Wisconsin, South Carolina, and Nebraska (Washburn University, 2008). An initial study in 2002 of student’s course grades and retention in online courses offered at Washburn showed no significant difference between students enrolled in online courses and students enrolled in traditional face-to-face course work (Peterson, personal communication, April 18, 2008). No studies of program retention have been completed. In 2008, Atkins reported overall enrollment at Washburn University decreased 6.7% from Fall 2004 to Fall 2008, from 7400 to 6901 students. During the same period, online course enrollment patterns increased 65%, from 3550 students to 5874 in 2007- 2008 (Washburn University, 2008). 
Atkins also reported that between 1998 and 2008, the ratio of traditional post-high school age students to nontraditional students enrolling at Washburn University reversed from 40:60 to 60:40. The shift in enrollment patterns produced an increase in enrollment in the early part of the 21st century; however, Washburn University anticipated a decrease in high school graduates in Kansas through 2016, based on demographic patterns of the state. The state figures are opposite the anticipated increase of traditional-age students nationally (NCES, 2008). The increase in distance education students in relation to the anticipated decline in traditional-age students provided the focus for the study. Purpose of the Study Online education has become an important strategy for the higher education institution that was the setting of this study. First, the purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. The second part of the study was a replication of studies comparing modes of online course delivery to traditional classroom-based instruction (Carmel & Gold, 2007; Russell, 1999). A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study’s purpose was to expand the knowledge base concerning online education to include its efficacy in providing baccalaureate degree completion opportunities. Research Questions Roberts (2004) stated research questions guide the study and usually provide the structure for presenting the results of the research. The research questions guiding this study were: 1. Is there a statistically significant difference between students’ grades in online classes and traditional face-to-face classes? 2. Is there a statistically significant difference between course retention rates in online classes and traditional face-to-face classes? 3. Is there a statistically significant difference between program retention for students entering the program enrolled in online classes and students entering the program enrolled in traditional face-to-face classes? Overview of the Methodology A quantitative study was utilized to compare grades by course, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized from comparative online and traditional face-to-face classes in two separate courses. In order to answer Research Question 1, a sample of 885 students enrolled in online and traditional face-to-face courses was identified. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006 in both the online and traditional face-to-face classes. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. 
A two-factor analysis of variance was used to analyze the potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze course and program retention (Research Questions 2 and 3). Delimitations Roberts (2004) defined delimitations as the boundaries of the study that are controlled principally by the researcher. The delimitations for this study were: 1. Only data from 2002 through 2008 from Technology Administration online and face-to-face courses were utilized. 2. The study was confined to students enrolled at Washburn University in the Technology Administration program. 3. Only grades and retention were analyzed. Assumptions Assumptions are defined as those things presupposed in a study (Roberts, 2004). The study was based on the following assumptions: 1. Delivery of content was consistent between online and face-to-face courses and instructors, 2. Course objectives were the same for paired online and traditional face-to-face courses, 3. All students enrolled in the TA program met the same criteria for admission to the University, 4. All data entered in the Excel spreadsheets were correct, 5. All students enrolled in the TA program met the same criteria for grade point average and program prerequisites. Definitions The following terms are defined for the purpose of this study: Distance education. Education or training courses delivered to remote locations via postal delivery, or broadcast by audio, video, or computer technologies (Allen, 2007). Dropout. A dropout is defined as a student who has left school and discontinued studies (Merriam-Webster's Collegiate Dictionary, 1998). Face-to-face delivery. This is a course that uses no online technology; content is delivered in person, either in written or oral form (Allen, 2007). Hybrid course. This course is a blend of the online and face-to-face course. A substantial proportion of the content is delivered online, typically using some online discussions and some face-to-face meetings (Allen, 2007). Online course. This defines a course where most or all of the content is delivered online via computer technologies. Typically, there are no face-to-face meetings (Allen, 2007). 2+2 PLAN. The Partnership for Learning and Networking is a collaborative set of online 2+2 baccalaureate degree programs developed by Washburn University. The programs require completion of an associate degree from one of the partner community or technical colleges (Washburn University, 2008). Retention. This term refers to the completion of a course by receiving a letter grade in a course, or a certificate of completion or degree for program completion (Washburn University, 2008). Web-assisted. A course that uses Web-based technology to facilitate what is essentially a face-to-face course (Allen, 2007). Organization of the Study This study consists of five chapters. Chapter One introduced the role of distance education in higher education. Chapter One included the background of the study, the research questions, overview of the methodology, the delimitations of the study, and the definition of terms. Chapter Two presents a literature review, which includes the history of occupational postsecondary education, distance education, and studies relating to grades and retention of students involved in distance education. 
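The statistical design described above pairs a two-factor ANOVA (delivery method by instructor, with their interaction) for course grades with a chi-square test for differences among retention proportions. The sketch below is a minimal, hypothetical illustration of that kind of analysis, not the study's actual code or data: the pandas/statsmodels/scipy calls, the column names (grade_points, delivery, instructor, retained, not_retained), and the example records are all assumptions chosen only to mirror the design as described.

```python
# Minimal sketch (hypothetical data): two-factor ANOVA on course grades by delivery
# method and instructor, plus a chi-square test on retained/not-retained counts.
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative archival records: one row per student enrollment (assumed layout).
grades = pd.DataFrame({
    "grade_points": [3.7, 2.3, 3.0, 4.0, 2.7, 3.3, 1.7, 3.0],  # course grade, 4.0 scale
    "delivery":     ["online", "online", "face_to_face", "face_to_face",
                     "online", "face_to_face", "online", "face_to_face"],
    "instructor":   ["A", "B", "A", "B", "A", "B", "B", "A"],
})

# Research Question 1: two-factor ANOVA with a delivery x instructor interaction term.
model = ols("grade_points ~ C(delivery) * C(instructor)", data=grades).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# Research Questions 2 and 3: chi-square test for differences among proportions,
# applied here to hypothetical retention counts by delivery method.
retention_counts = pd.DataFrame(
    {"retained": [410, 395], "not_retained": [45, 35]},
    index=["online", "face_to_face"],
)
chi2, p_value, dof, expected = stats.chi2_contingency(retention_counts)
print(f"chi2={chi2:.3f}, p={p_value:.3f}, dof={dof}")
```

The printed output would be an ANOVA table with F statistics for delivery, instructor, and their interaction, plus a chi-square statistic and p-value for the retention comparison; an actual analysis would substitute the archival Washburn records for the toy data used here.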
Chapter Three describes the methodology used for the research study. It includes the selection of participants, design, data collection, and statistical procedures of the study. Chapter Four presents the findings of the research study. Finally, Chapter Five provides a discussion of the results, conclusions, and implications for further research and practice. 12 CHAPTER TWO LITERATURE REVIEW This chapter presents the background for research into the efficacy of distance education in the delivery of higher education. Research studies have focused primarily on grades as a measure of the quality of distance education courses as compared to traditional face-to-face instruction. Utilizing grades has produced a dividing line among education researchers concerning the use of distance education as a delivery model. Retention in distance education has focused primarily on single courses, with little program retention data available. Data from retention studies in higher education have focused primarily on the traditional 4-year university student. Retention studies of community college students have produced quantitative results; however, these studies have been directed at community college students who identify themselves as transfer students early in their community college careers. Retention studies of students enrolled in occupationally oriented programs are limited. Statistical data of higher education shows an increased use of distance education for traditional academic courses as well as occupationally oriented courses. The increase in distance education courses and programs has provided a new dimension to studies of both grades and retention. The recognition of this increase, as well as questions concerning its impact on student learning and retention, produced the impetus for this study. The following review of the literature represents the literature related to this research study. Through examination of previous research, the direction of the present study was formulated. Specifically, the chapter is organized into four sections: (a) the 13 history of occupational transfer programs; (b) the history and research of distance education, including occupational transfer programs utilizing distance education; (c) research utilizing grades as an indicator of student learning in online education; and (d) research focusing on student retention in higher education, including student retention issues in transfer education and online transfer courses and programs. History of Occupational Transfer Programs The measure of success in higher education has been characterized as the attainment of a bachelor’s degree at a 4-year university. Occupationally oriented education was considered primarily a function of job preparation, and until the 1990s was not considered transferrable to other higher education institutions. Occupational transfer programs are a recent occurrence within the postsecondary system that provides an additional pathway to bachelor’s degree completion. Historically, the postsecondary experience in the United States developed as a three-track system. Colleges were established in the United States in 1636 with the founding of Harvard College (The Harvard Guide, 2004). Junior colleges were first founded in 1901 as experimental post-high school graduate programs (Joliet Junior College History, 2008). Their role was initially as a transfer institution to the university. 
When the Smith-Hughes Act was passed in 1917, a system of vocational education was born in the United States (Jacobs & Grubb, 2003), and was designed to provide further education to those students not viewed as capable of success in a university setting. Vocational education, currently referred to as occupational or technical education, was not originally designed to be a path to higher education. The first programs were designed to help agricultural workers complete their education and increase their skills. 14 More vocational programs were developed during the early 20th century as industrialization developed and as increasing numbers of skills were needed by workers in blue-collar occupations (Jacobs & Grubb, 2003). In the mid-20th century, some junior colleges expanded their programs beyond academic selections to provide occupational development and continuing education. Because of the geographic area from which they attracted students, junior colleges developed a role as “community” colleges. They also solidified their role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs to 4-year universities involved traditional academic degrees, including the Associate of Arts (AA) and Associate of Science (AS) degrees. Occupational programs and continuing education were viewed as terminal and non-transferrable. In 1984, Congress authorized the Carl Perkins Vocational and Technical Education Act (P.L. 98-524). In the legislation, Congress responded to employers’ concerns about the lack of basic skills in employees by adding academic requirements to vocational education legislation. Vocational program curriculum was expanded to include language arts, mathematics, and science principles, and the curriculum reflected the context of the program. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was created in 1990 to determine the skills young people need to succeed in the world of work (U.S. Department of Labor, 2000). In the second Carl Perkins reauthorization in 1990 (P.L. 105-332), Congress responded to the report, which targeted academic and job skills, by outlining a seamless system of vocational and academic 15 education to prepare vocational students to progress into and through higher education. This emphasis led to the development of Associate of Applied Science (AAS) degrees during the 1990s. Granted to those who have successfully completed programs in the applied arts and sciences for careers, AAS degrees were seen as terminal (Kansas Board of Regents, 2002-2003). But as one goal was attained, conversation turned to creating a pathway from occupational associate degrees to bachelor’s degree completion. The desire of students to continue from technical degrees to a baccalaureate was not a new idea. In a paper presented in 1989 to the American Technical Association national conference, TrouttErvin and Morgan’s overview of 2+2 programs showed acceptance of AAS degrees at traditional universities was generally non-existent. Their suggestion for an academic bridge from early technical education to baccalaureate programs highlighted programs accepting AAS degrees toward baccalaureate completion were an exception rather than a rule (Troutt-Ervin & Morgan, 1989). 
It was not until the late 1990s that applied baccalaureate degrees recognized credits from technical degree students who had previously thought of themselves in a terminal program to complete their baccalaureate degree (Wellman, 2002). Despite the advance of recognition of AAS degrees, standard definitions of transfer students continued to exclude students who completed technical programs. The U.S. Department of Education did not include students receiving an Associate of Applied Science degree in the definition of students preparing for transfer to 4-year colleges (Bradburn, Hurst, & Peng, 2001; Carnevale, 2006). Most states had comparable policies in place concerning core academic curriculum, articulation agreements, transfer of credit, 16 and statewide transfer guides. There was no general recognition of occupational credit transfer. Only a few states, including Kansas, Missouri, and Washington, allowed credits earned in occupationally oriented degrees to transfer to 4-year institutions (Townsend, 2001). No state had set clear goals for the transference of occupational credits between institutions or for the state as a whole (Wellman, 2002). Despite the lack of recognition of occupational transfer credit at the federal level, a new definition of transfer education had emerged. Initially defined as the general education component of the first 2 years of a baccalaureate, the definition of transfer education now included any courses that transferred to a 4-year college, regardless of the nature of the courses (Townsend, 2001). The line between vocational schools, community colleges, and 4-year institutions blurred in the United States as employers and students increasingly made business decisions regarding education and workforce development. Employers increasingly asked for employees with academic and technical skills, as well as critical thinking skills and personal responsibility (U.S. Department of Labor, 2000). Returning students themselves were more attuned to the demands of the 21st century workforce. Their desire to return to higher education, coupled with the economy and the variety of options available to them, required a more adaptive higher education system (Carnevale, 2006). There was growing demand among new and returning students for higher education opportunities responsive to their needs. The expanding needs of the returning student provided opportunities for higher education to respond by utilizing different delivery models. 17 Distance Education Online education became a strategy for postsecondary institutions when the first correspondence courses were initiated with the mail service in the early 20th century (Russell, 1999). As various technologies emerged, distance education utilized television and video models, in addition to paper-based correspondence courses. The expansion of distance education utilizing computer technologies renewed academic debate over the efficacy of the delivery model. Online education utilizing the Internet became a significant factor in the 1990s, prompting renewed evaluation of the use of distance learning opportunities (Russell, 1999, Phipps & Meristosis, 1999). In 1999–2000, the number of students who took any distance education courses was 8.4% of total undergraduates enrolled in postsecondary education (NCES, 2000). In 2000, the report of the Web-Based Education Commission to the President and Congress concluded that the Internet was no longer in question as a tool to transform the way teaching and learning was offered. 
The Commission recommended that the nation embrace E-learning as a strategy to provide on-demand, high-quality teaching and professional development to keep the United States competitive in the global workforce. They also recommended continued funding of research into teaching and learning utilizing web-based resources (Web-Based Education Commission, 2000). The acceptance of the importance of the Internet for delivery of higher education opened new opportunities for research and continued the academic debate of the quality of instruction delivered in online education courses and programs. In a longitudinal study from 2002-2007, the Sloan Consortium, a group of higher education institutions actively involved in online education, began studies of online education in the United States over a period of 5 years. In the first study, researchers Allen and Seaman (2003) conducted polls of postsecondary institutions involved with online education and found that students overwhelmingly responded to the availability of online education, with over 1.6 million students taking at least one online course during the Fall semester of 2002. Over one third of these students took all of their courses online. The survey also found that in 2002, 81% of all institutions of higher education offered at least one fully online or blended course (Allen & Seaman, 2003). In their intermediate report in 2005, Allen and Seaman postulated that online education had continued to make inroads in postsecondary education, with 65% of schools offering graduate courses and programs face-to-face also offering graduate courses online. Sixty-three percent of undergraduate institutions offering face-to-face courses also offered courses online. From 2003 to 2005, the survey results showed that online education, as a long-term strategy for institutions, had increased from 49% to 56%. In addition, core education online course offerings had increased (Allen & Seaman, 2005). In Allen and Seaman’s final report (2007b) for the Sloan Consortium, the researchers reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. Allen and Seaman also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. They found that by 2007, 2-year institutions had the highest growth rates and accounted for over half of the online enrollments in the previous 5 years. The researchers concluded, based on a survey conducted as part of the research, that institutions believed improved student access was the top reason for offering online courses and programs (Allen & Seaman, 2007b). 
By 2004, only 31% of students enrolled in vocational schools had participated in some form of distance education during their program of study (NCES, 2005). In 2008, hands-on instruction in programs such as automobile mechanics and welding, and the clinical portion of health occupations programs, continued to be taught in the traditional classroom setting (NCES, 2008). Analysis of data reported by the NCES indicated that distance education had become a staple for higher education institutions. At both the 4-year and 2-year university level, over 65% of institutions offered more than 12 million courses in 2006-2007 by distance education. While vocational education had traditionally been more hands-on, distance education had become more prevalent in providing opportunities for students to participate in components of the system over the Internet (NCES, 2008). 20 Distance education became the prevalent strategy for higher education institutions to expand their services to new and returning students, without the financial implications of capital expansion. Higher education utilized the strategy to market to students outside their traditional geographic reach by utilizing the power of the Internet. The increasing demand from students of all ages for online opportunities provided new ground for the expansion of higher education opportunities. Grades as an Indicator of Quality of Student Learning The grading system in the United States educational system has served as an indicator of knowledge for over 100 years. Educators have utilized high school grades as a sorting mechanism in American schools to determine postsecondary opportunities. Modern society has accepted honors attainment, graduation honors, and course grades as an indicator of knowledge acquisition in postsecondary education. Stray (2001) reported that the use of grading in schools can be traced to the industrial revolution and the development of factories. William Farish of Cambridge University developed the first grading system in higher education in 1792 (Stray, 2001). Farish mimicked the system established by factories of the time: grade A being the best. The thought was that Farish employed the grading system in order to teach more students, an aberration at that time when instructors rarely had more than a few. The demand for more higher education opportunities prompted Farish to open his class to more students, and as such, led to his use of a sorting system. This was the first known record of grading utilized in classrooms to measure student achievement (Stray, 2001). 21 Smallwood (1935) reported the first grading in higher education at Yale University in 1792. Stiles, President of Yale University, directed the use of the scale in the late 18th century. However, Smallwood noted it was not until 1813 that any record of grades or marking appeared. Using a scale of 100, philosophy and mathematic professors instituted the first use of a marking instrument in the 1800s at Harvard. Smallwood noted early systems were experimental, utilizing different numerical scales, with no standardized system in place between higher education institutions. It was not until the late 1800s that faculty began using descriptors, such as A and B, to rank students according to a predetermined numerical scale (Smallwood, 1935). Experimentation with evaluation of achievement continued into the early 20th century, when educational psychologists, including Dewey and Thorndike, attempted to compare grading scales with intelligence testing. 
Thorndike’s philosophy of standardized testing and grading survived the 20th century, and his quote, “Whatever exists at all exists in some amount” (Thorndike, 1916, as cited in Ebel & Frisbie, p. 26) has been utilized in educational measurement textbooks as a validation of the use of standards of measurement to measure achievement (Ebel & Frisbie, 1991). The use of grades expanded to community colleges, high schools, and elementary schools in the early 1900s (Pressey, 1920). The use of grades throughout the educational system is fairly standardized today with the 4.0 scale. It is this standardization that allows comparison of grades as achievement between educational levels and institutions (Ebel & Frisbie, 1991) and allows grades to be utilized as a measure for comparison of educational achievement. 22 Researchers analyzing the success of community college transfer students have traditionally studied the grades of the traditional transfer student with an AA or AS degree. Keeley and House’s 1993 study of sophomore and junior transfer students at Northern Illinois University analyzed “transfer shock” (p. 2) for students matriculating from community colleges. The researchers found students who transferred from a community college obtained a grade point average significantly lower in their first semester than did students who began their college career at a 4-year institution. However, the results of the longitudinal studies showed that transfer students who persisted to graduation showed an equivalent GPA at baccalaureate completion (Keeley & House, 1993). Students who transferred from occupationally oriented degree programs typically were not included in traditional studies of transfer students. While the research in general does not include AAS students in traditional transfer data, limited conclusions were available comparing AAS students to traditional 4-year college attendees. Townsend’s study at the University of Missouri-Columbia (2002) showed no difference in grades at baccalaureate graduation between students with an AA/AS degree and students with an AAS degree. The use of grades as an indicator of the level of student achievement has been relied upon by studies comparing traditional classroom instruction and distance instruction. Research analyzing the effectiveness of student learning in distance education began with the first correspondence courses offered utilizing the mail service (Russell, 1999). The study of effectiveness of correspondence courses expanded to include new technologies, such as television and video courses, and increased with the proliferation of 23 online educational offerings. Researchers continued to challenge the effectiveness of learning methods not delivered in traditional higher education settings. In 1991, Russell reviewed over 355 studies, dating from the 1930s and continuing through the late 1980s, and found no significant difference in student learning using any form of distance education, as compared with students in classroom-based instruction (Russell, 1999). Russell’s conclusion formed the basis for a series of works collectively known as “No Significant Difference.” Russell’s conclusion from his studies follows: The fact is the findings of comparative studies are absolutely conclusive; one can bank on them. 
No matter how it is produced, how it is delivered, whether or not it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts even though students would rather be on campus with the instructor if that were a real choice. (p. xviii) Overwhelmingly, studies have supported Russell’s conclusions, including Neuhauser’s (2002) study of traditional face-to-face education and online education in a business communications class at a large urban university in North Carolina. Neuhauser concluded there was no significant difference in pre- and post-test scores of students enrolled in online and traditional communications classes. In addition, Neuhauser found no significant difference in final grades, homework grades, and grades on research papers, even though learners in the online course were significantly older than were learners in the traditional face-to-face section. The Summers et al. (2005) research included a comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. The study, conducted at the University of Missouri-Columbia, included undergraduate nursing students who were tested on both their pre- and post-course knowledge of statistics. Their results indicated that utilizing grades as an indicator of knowledge showed no significant difference between the online and traditional classroom students. In their meta-analysis, Machtmes and Asher (2002) reviewed 30 studies and concluded there did not appear to be a difference in achievement, as measured by grades, between distance and traditional learners. As technology use continued to evolve in online education, various studies were conducted to determine whether different delivery methods created a difference in the grades of online students compared to their face-to-face counterparts. A study conducted by Carmel and Gold (2007) supported Russell’s original conclusion by analyzing specific types of online platforms and delivery models. Carmel and Gold’s study included hybrid and traditional classroom-based instruction. They analyzed results from 164 students in 110 courses and found no significant difference in student achievement based on grades between students enrolled in either delivery method. Additional studies supporting Russell’s theory have crossed multiple content areas and delivery models. Brown and Liedholm’s (2002) study at Michigan State University included microeconomics students in virtual, hybrid, and traditional classroom-based instruction. The study included 389 students in the traditional setting, 258 in the hybrid delivery section, and 89 students enrolled in online education. No significant difference in student learning as measured by end-of-course grades was found. Research also showed that type of course discipline is not affected by the online delivery model. Schulman and Simms (1999) compared pretest and posttest scores of students enrolled in an online course and a traditional course at Nova Southeastern University. The researchers compared 40 undergraduate students enrolled in online courses and 59 undergraduate students enrolled in the classroom setting of the same course. Results indicated that the students who selected online courses scored higher than traditional students on the pretest. However, posttest results showed no significant difference for the online students versus the in-class students. 
Schulman and Simms concluded that online students were learning equally as well as their classroom-based counterparts. Reigle’s (2007) analysis across disciplines at the University of San Francisco and the University of California found no significant difference between online and face-to-face student grade attainment. Shachar and Neumann (2003) conducted a meta-analysis that estimated and compared the differences between the academic performance of students enrolled in distance education and those enrolled in traditional settings over the period from 1990-2002. Eighty-six studies containing data from over 15,000 participating students were included in their analysis. The results of the meta-analysis showed that in two-thirds of the cases, students taking courses by distance education outperformed their student counterparts enrolled in traditionally instructed courses. Lynch, during the use of the “Tegrity” system, a brand-specific online platform at Louisiana State University, found that students’ grades were slightly better after utilizing the technology than when the traditional approach was used (Lynch, 2002). Initial results of a University of Wisconsin-Milwaukee study of 5000 students over 2 years indicated that the U-Pace online students performed 12% better than their traditional Psychology 101 counterparts on the same cumulative test (Perez, 2009). Arle’s (2002) study found that students enrolled in online human anatomy courses at Rio Salado College scored an average of 6.3% higher on assessments than the national achievement average. Students were assessed using a national standardized test generated by the Human Anatomy and Physiology Society, whose norming sample is based entirely on traditional classroom delivery (Arle, 2002). In a study conducted by Stephenson, Brown, and Griffin (2008), comparing three different delivery styles (traditional, asynchronous electronic courseware, and synchronous e-lectures), results indicated no increased effectiveness of any delivery style when all question types were taken into account. However, when results were analyzed, students receiving traditional lectures showed the lowest levels on questions designed to assess comprehension. Research found supporters in higher education academic leaders. In a 2006 survey of Midwestern postsecondary institutions concerning their online offerings, 56% of academic leaders in the 11 states rated the learning outcomes in online education as the same or superior to those in face-to-face instructional settings. The proportion of higher education institutions believing that online learning outcomes were superior to face-to-face outcomes was still relatively small, but had grown by 34% since 2003, from 10.2 to 13.7% (Allen & Seaman, 2007b). This belief added merit to the conclusions supported by Russell and others. Russell’s (1999) “no significant difference” conclusion had its detractors. The most commonly cited is Phipps and Merisotis (1999), who reviewed Russell’s original meta-analysis (1999) and reported a much different conclusion. They concluded that the overall quality of the original research was questionable, that much of the research did not control for extraneous variables, and therefore it could not show cause and effect. They included in their findings evidence that the studies utilized by Russell (2000) in the meta-analysis did not use randomly selected subjects, did not take into account the differences among students, and did not include tests of validity and reliability. 
The Phipps and Merisotis (1999) analysis included the conclusion that research has focused too much on individual courses rather than on academic programs, and has not taken into account differences among students. They postulated that based on these conclusions, there is a significant difference in the learning results, as evidenced by grades, of students participating in distance education as compared to their classroombased peers. Their analysis of Russell’s original work questioned both the quality and effectiveness of research comparing distance and traditional education delivery. While there has been ongoing conjecture that online education students are not receiving an equivalent learning experience compared to their traditional classroom counterparts, studies utilizing grades as an indicator of student learning have produced little evidence of the disparity. The incidence of studies showing significant negative differences in grades of online learners is small. Higher education institutions have indicated their support for online education, and its continued growth has allowed studies such as the present research to contribute to ongoing dialogue. Student Retention in Postsecondary Education Persistence and retention in higher education is an issue that has intrigued researchers for over 50 years. Quantitative studies conducted in the mid-20th century produced data that caused researchers to look at low retention rates in higher education 28 and search for answers. This question has continued to consume researchers and higher education institutions. In 1987, Tinto attempted to summarize studies of individual student retention in higher education by proposing a theory to allow higher education administrators to predict success and support students (Tinto, 1987). Tinto’s model of student engagement has been in use for over 20 years as higher education administrators and faculty attempt to explain student retention issues at universities and colleges. Tinto’s model primarily focused on factors of student engagement: How students respond to instructors, the higher education community itself, and students’ own engagement in learning are the primary factors Tinto theorized as determining the student’s retention. In the concluding remarks to his 1987 treatise on retention, Tinto acknowledged that persistence in higher education is but one facet of human growth and development, and one that cannot necessarily be attributed to a single factor or strategy. Tinto’s (1987) original study of student retention included the observation that student retention is a complicated web of events that shape student leaving and persistence. He observed that the view of student retention had changed since the 1950s, when students were thought to leave due to lack of motivation, persistence, and skills, hence the name dropout. In the 1970s, research began to focus on the role of the environment in student decisions to stay or leave. In the 1990s, Tinto proposed that the actions of the faculty were the key to institutional efforts to enhance student retention (Tinto, 2007). This was a significant addition to his theory, placing the cause on the instructor instead of the student, and it has done much to influence retention strategies 29 utilized in higher education institutions (Tinto, 2007). Tinto’s studies have driven research in both traditional retention studies and those involving distance education. 
Studies of the persistence of the postsecondary student routinely focus on 4-year postsecondary education. It is only within the last 20 years that persistence studies have included community college students and occupational students, acknowledging that their reasons for entering the postsecondary community are different from the traditional 4- year higher education participant (Cohen & Brawer, 1996). With different avenues to a baccalaureate degree more prevalent, the research into college persistence has expanded to include other types of programs and students. Postsecondary student retention rates routinely utilize data from longitudinal studies of students entering in a Fall semester and completing a bachelor’s program no more than 6 years later (NCES, 2003). The National Center for Education Statistics reported that 55% of those seeking a baccalaureate degree would complete in 6 years (NCES, 2003). The report acknowledged institutions are unable to follow students who transfer to other institutions; they are able to report only the absence of enrollment in their own institution. Research has also found a large gap between community college entrants and 4- year college entrants in rates of attaining a bachelor’s degree. Dougherty (1992) reported that students entering community college receive 11 to 19% fewer bachelor’s degrees than students beginning at a 4-year university. Dougherty postulated that the lower baccalaureate attainment rate of community college entrants was attributable to both their individual traits and the institution they entered (Dougherty, 1992). 30 Studies of student retention of community college also vary based on the types of students. Community college retention rates are routinely reported as lower than traditional 4-year institutions (NCES, 2007). Cohen and Brawer (1996) attributed the differences in retention to the difference in the mission. In many instances, students did not enroll in a community college in order to attain a degree (Cohen & Brawer, 1996). The most recent longitudinal study in 1993 showed a retention rate of 55.4% of students after 3 years (NCES, 2001). Of community college students, only 60.9% indicated a desire to transfer later to a baccalaureate degree completion program (NCES, 2003). While retention data collected by the federal government (NCES, 2003) did not include students with an AAS degree, Townsend’s studies of the transfer rates and baccalaureate attainment rates of students in Missouri who had completed an Associate of Arts and students who had completed an Associate of Applied Science degree was 61% compared to 54% (Townsend, 2001). Vocational or occupational programs have reported retention rates as “program completion,” a definition involving completion of specific tasks and competencies instead of grades and tied to a limited program length. This state and federal requirement indicates program quality and ensures continued federal funding. In 2001, the U.S. Department of Education reported a 60.1% completion rate of postsecondary students enrolled in occupational education (NCES, 2007). Until 1995, the reasons for students leaving was neither delineated nor reported; it was not until federal reporting requirements under the Carl Perkins Act of 1994 that institutions were required to explore why students were not retained in vocational programs (P.L. 105-332). 31 Distance education provided a new arena for the study of student persistence. 
Theorists and researchers have attempted to utilize Tinto's model of student persistence to explain retention issues involved with distance education. However, Rovai (2003) analyzed the differing student characteristics of distance learners as compared to the traditional students targeted by Tinto's original models and concluded that student retention theories proposed from that population were no longer applicable to distance education learners. Rovai proposed that distance educators could address retention in ways that traditional higher education has not. He suggested that distance educators utilize strategies such as capitalizing on students' expectations of technology and addressing economic benefits and specific educational needs to increase student retention in courses (Rovai, 2003).

The expanded use of technology created a distinct subset of research into student retention issues. In 2004, Berge and Huang developed an overview of models of student retention, with special emphasis on models developed to explain the retention rates in distance education. Their studies primarily focused on the variables in student demographics and external factors, such as age and gender, which influence persistence and retention in online learning. Berge and Huang found that traditional models of student retention such as Tinto's did not acknowledge the differences in student expectations and goals that are ingrained in the student's selection of the online learning option.

Other researchers have attempted to study retention issues specifically for online education. In a meta-analysis, Nora and Snyder (2009) found the majority of studies of online education focused on students' individual characteristics and individual perceptions of technology. Nora and Snyder concluded that researchers attempt to utilize traditional models of student engagement to explain student retention issues in distance or online learning courses, with little or no success. This supported Berge and Huang's conclusions. Nora and Snyder (2009) also noted a dearth of quantitative research.

Few quantitative studies exist that support higher or equal retention in online students compared to their classroom-based counterparts. One example is the Carmel and Gold (2007) study. They found no significant difference in student retention rates between students in distance education courses and their traditional classroom-based counterparts. The study utilized data from 164 students, 95 enrolled in classroom-based courses and 69 enrolled in a hybrid online format. Participants self-selected their delivery format and were not all enrolled in the same course, introducing variables not accounted for in the study.

The majority of quantitative studies instead concluded there is a higher retention rate in traditional classrooms than in distance education. In the Phipps and Merisotis (1999) review of Russell's original research, which included online education, results indicated that research has shown even lower retention rates in online students than in students attending classes in the traditional college setting. The high dropout rate among distance education students was not addressed in Russell's meta-analysis, and Phipps and Merisotis found no suitable explanation in the research. They postulated that the decreased retention rate documented within distance education studies skews achievement data by excluding the dropouts.
Diaz (2002) found a high drop rate for online students compared to traditional classroom-based students in an online health education course at Nova Southeastern. Other studies have supported the theory that retention of online students is far below that of traditional campus students. In 2000, Carr, reporting for The Chronicle of Higher Education, noted that online courses routinely lose 50% of the students who originally enrolled, as compared to a retention rate of 70-75% for traditional face-to-face students. Carr reported dropout rates of up to 75% in online courses as a likely indicator of the difficulty faced in retaining distance education students who do not routinely meet with faculty. The data have not been refuted.

As community colleges began utilizing distance education, retention rates were reported as higher than those of traditional students (Nash, 1984). However, the California Community College System report for Fall 2008 courses showed inconsistent retention results for distance education learners, varying by the type of course. Results indicated equivalent retention rates for online instruction compared to traditional coursework in the majority of courses. Lower retention rates were indicated in online engineering, social sciences, and mathematics courses as compared to traditional classroom instructional models (California Community Colleges Chancellor's Office, 2009).

Due to the limited number of vocational/technical or occupational courses taught in the online mode, there was little data on student retention. In 1997, Hogan studied technical course and program completion of students in distance and traditional vocational education and found that course completion rates were higher for distance education students. However, program completion rates were higher for traditional students than for students enrolled in distance education (Hogan, 1997).

In summary, studies of retention have focused primarily on student characteristics while acknowledging that postsecondary retention rates vary according to a variety of factors. Research showed mixed results concerning the retention rate of online students, though quantitative data lean heavily toward a lower course retention rate for online students. Data from 4-year universities have shown lower retention rates for online students than for traditional face-to-face students, while community colleges have shown inconsistent results. Data from vocational-technical education have been limited, but course retention rates are higher for online students, while program retention rates are lower. No significant research factor affecting retention has been isolated between students in online baccalaureate completion programs and students participating in traditional classroom-based settings.

Summary

Research studies have been conducted analyzing student retention in higher education, transfer and retention of students from community colleges to universities, the impact of distance education, and student achievement and retention factors related to distance education. However, no comparative research was identified that compared the achievement and retention of students participating in an occupationally oriented transfer program utilizing both online education and traditional classroom-based instruction. Chapter Three addresses the topics of research design, hypotheses, and research questions. Additionally, population and sample, data collection, and data analysis are discussed.
CHAPTER THREE

METHODOLOGY

The purpose of this study was to determine if there is a significant difference between course grades of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The study also examined if there is a significant difference between course retention and program retention of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The methodology employed to test the research hypotheses is presented in this chapter. The chapter is organized into the following sections: research design, hypotheses and research questions, population and sample, data collection, data analysis, and summary.

Research Design

A quantitative, quasi-experimental research design was selected to study grades, course retention, and program retention of students enrolled in the Technology Administration program. The design was chosen as a means to determine if significant differences occur between online and face-to-face students by examining numerical scores from all participants enrolled, as well as retention rates in both courses and programs in the Technology Administration program.

Hypotheses and Research Questions

This study focused on three research questions with accompanying hypotheses. The research questions and hypotheses guiding the study follow.

Research Question 1: Is there a statistically significant difference between students' grades in online classes and traditional face-to-face classes?

H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance.

Research Question 2: Is there a statistically significant difference between the course retention rates of students in online classes and traditional face-to-face classes?

H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

Research Question 3: Is there a statistically significant difference in program retention between students who entered the program in online classes and students who entered the program in traditional face-to-face classes?

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

Population and Sample

The two populations selected were students enrolled in online and face-to-face courses. The sample included students enrolled in Technology Administration courses. Student enrollment was analyzed for all Technology Administration courses in the program sequence to determine the number of samples available in online and face-to-face classes. The course enrollment data for the sample are outlined in Table E1. The subsample of the data utilized for the study is presented in Table 1.
Table 1

Technology Administration Enrollment Data

Year        Instructor   TA 300 FTF   TA 300 OL   TA 310 FTF   TA 310 OL
Spring 02   A            -            -           14           25
Fall 02     A            11           20          9            26
Spring 03   A            -            -           29           38
Fall 03     A            20           29          13           34
Spring 04   B            -            -           32           25
Fall 04     B            18           32          10           28
Spring 05   B            -            -           23           31
Fall 05     B            15           28          11           28
Spring 06   B            -            -           13           30
Fall 06     B            14           24          24           32
Spring 07   B            -            -           15           33
Fall 07     B            16           23          27           30
Spring 08   B            -            -           22           35
Total                    94           156         242          395

Note: FTF = face-to-face; OL = online. TA 300 = Evolution and Development of Technology; TA 310 = Technology and Society. A dash indicates no enrollment data for that course in that semester.

The subsample for hypothesis 1 and hypothesis 2 included all students enrolled in two entry-level courses required for completion of the Technology Administration program: TA 300 Evolution and Development of Technology and TA 310 Technology and Society. The university offered the courses in online and face-to-face formats during the period of the study. Two instructors, identified as A and B, were involved with teaching the online and face-to-face courses. The two courses were selected because they met the following criteria: (a) the same faculty member taught both courses, (b) the courses were offered consistently in online and face-to-face instruction over the period of the study, and (c) the syllabi for simultaneous online and face-to-face sections were identical.

For hypothesis 3, data included records of all students enrolled in TA 300 Evolution and Development of Technology for the Fall semesters of 2002, 2003, 2004, 2005, and 2006. The course was selected for inclusion in the study based on the following criteria: (a) student enrollment in the course was the result of declaration of the Technology Administration program major, and (b) the parameters of the study allowed students 2 or more years to complete the program requirements. For the purpose of the study, all student names were removed.

Data Collection

An Institutional Review Board (IRB) form was prepared for Washburn University approval prior to data collection. The study was designated as an exempt study. The Washburn University IRB form is provided in Appendix A. Approval of the IRB was transmitted by e-mail; a copy is located in Appendix B. In addition, an IRB form was submitted to Baker University. The form is located in Appendix C. The Baker IRB approval letter is located in Appendix D.

Washburn University had two types of data collection systems in place during the period identified for the study, Spring 2002 through Spring 2008. The AS 400 data collection system generated paper reports for 2002 and 2003. The researcher was allowed access to paper records for 2002 and 2003. Enrollment results for all Technology Administration sections for 2002-2003 were entered manually into an Excel spreadsheet. In 2004, the University transferred to the Banner electronic student data management system. All records since 2004 were archived electronically and were retrieved utilizing the following filters for data specific to students enrolled in the identified Technology Administration courses: TA course designation and specific coding for the year and semester to be analyzed (01 = Spring semester, 03 = Fall semester, 200X for the specified year). Results retrieved under the Banner system were saved as an Excel spreadsheet by the researcher. The course enrollment data for the sample are presented in Tables E1 and E2.
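For illustration only, the kind of record filtering and semester coding described above can be expressed programmatically. The study itself relied on the AS 400 paper reports, the Banner system's filters, and manual entry into Excel; the file and column names in the following sketch are hypothetical placeholders rather than actual Banner fields.

```python
# Hypothetical sketch of organizing exported enrollment records using the
# semester codes described above (01 = Spring, 03 = Fall). The file and
# column names are assumptions, not the study's actual Banner fields.
import pandas as pd

records = pd.read_excel("ta_sections_export.xlsx")  # hypothetical export

# Keep Technology Administration sections from the study period.
records = records[records["course"].str.startswith("TA")]
records = records[records["year"].between(2002, 2008)]

# Translate the term code into a readable semester label.
records["semester"] = records["term_code"].map({"01": "Spring", "03": "Fall"})

# Enrollment counts by course, semester, and delivery format (cf. Table 1).
print(records.groupby(["course", "year", "semester", "delivery"]).size())
```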
Student transcripts and records were analyzed to determine program completion or continued enrollment in the program for the program retention analysis. Documents examined included paper student advising files located within the Technology Administration department and specific student records housed within the Banner reporting system. Technology Administration course TA 300 was selected based on the following: (a) it is a required entry course only for Technology Administration majors, and (b) TA 310 is a dual enrollment course for business department majors.

Data Analysis

Data analysis for all hypothesis testing was conducted utilizing SPSS software version 16.0. The software system provided automated analysis of the statistical measures. To address Research Question 1, a two-factor analysis of variance was used to test for a potential difference in course grades due to delivery method (online and face-to-face), a potential difference due to instructor (instructors A and B), and a potential interaction between the two factors. Salkind (2008) referred to a difference between the levels of any single factor as a main effect. The analysis produces three F statistics, used to determine whether a difference in grades between online students and their classroom-based counterparts reflects a main effect for delivery, a main effect for instructor, or an interaction between instructor and delivery.

Chi-square testing was selected to address Research Questions 2 and 3. The rationale for selecting chi-square testing was to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Salkind, 2008). If the obtained chi-square value is greater than the critical value, there is sufficient evidence to reject the null hypothesis in favor of the research hypothesis. For Research Question 2, a chi-square test for differences between proportions analyzed course retention of online and face-to-face students at the end of the semester. For Research Question 3, a chi-square test for differences between proportions analyzed program retention, comparing students who began the program in the online section of TA 300 to students who began in the face-to-face section.

Limitations of the Study

Roberts (2004) defined limitations as those features of a study that may affect its results or the ability to generalize them. The limitations of this study included (a) the potential for data entry error, (b) curriculum modifications made by instructors over the period of the study that were not reflected in the syllabi, (c) the behavior of the instructors during delivery in the two different formats, and (d) the rationale of students for selecting one course delivery method over another. These may affect the generalizability of this study to other populations.

Summary

This chapter described the research design, population and sample, hypotheses, data collection, and analysis used in this research study. Statistical analyses using two-factor analysis of variance and chi-square tests were used to determine if there are significant differences in the course grades, course retention, and program retention of students enrolled in online classes as compared to their face-to-face counterparts. The results of this study are presented in Chapter Four.

CHAPTER FOUR

RESULTS

The study had three main purposes. The first purpose was to determine if there was a difference in grades between students in online classes and students in traditional face-to-face classes in the Technology Administration program.
In addition, the study was designed to examine the difference in course retention rates of students in the online classes as compared to the face-to-face classes. The third part of the study was designed to examine program retention rates of students who began the program in online classes and students who began the program in traditional face-to-face classes. This chapter begins with the descriptive statistics for the sample: gender, age, grades by gender, and course selection of students in online or face-to-face courses by gender. From the three research questions, research hypotheses were developed, and the results of the statistical analyses used to test each hypothesis are presented.

Descriptive Statistics

Demographic data for the sample were collected from the student data system for 2002 through 2009. The descriptive statistics presented below include gender (n = 884), age (n = 880), grades by gender (n = 884), and course selection, online or face-to-face, by gender (n = 884). Table 2 presents the cross-tabulation of the frequencies for gender and age group of the sample selected for the study. The mean age for the sample tested was 31.06 years, with a standard deviation of 9.46 years. The age range of the sample was from 18 to 66 years. One participant did not report gender. Age was not available for three participants.

Table 2

Participant Age Group by Gender (n = 880)

          Age range in years
          < 20   20-29   30-39   40-49   50-59   60-69   Total
Female    0      198     121     62      29      3       413
Male      5      281     104     53      19      5       467

Note: Gender not reported for one participant; age not reported for four participants.

Table 3 presents the frequency of course grades by gender and the total number of students receiving each grade. Grades were distributed across the continuum, with slightly more females than males receiving A's, more males than females receiving B's, C's, and F's, and a nearly equal number of students receiving D's. More males withdrew from classes than did females.

Table 3

Course Grades by Gender (n = 884)

Grade               Female   Male   Total
A                   245      208    453
B                   53       79     132
C                   32       70     102
D                   17       16     33
F                   37       55     92
No Credit           1        0      1
Passing             0        1      1
Withdraw            25       42     67
Withdraw Failing    3        0      3
Total               413      471    884

Note: Gender not reported for one participant.

Table 4 presents the course selection patterns of male and female students. Overall, more students selected online courses than face-to-face courses. Females and males enrolled in online courses in nearly equal numbers; however, proportionally more females (68.8%) than males (60.9%) chose the online instructional format instead of face-to-face.

Table 4

Course Selection by Gender (n = 884)

Course type    Female   Male   Total
Face-to-face   129      184    313
Online         284      287    571
Total          413      471    884

Note: Gender not reported for one participant.

Hypothesis Testing

H1: There is a statistically significant difference in the course grades of students enrolled in online classes and students enrolled in a traditional classroom setting at the 0.05 level of significance.

The sample consisted of 815 students enrolled in online and face-to-face Technology Administration courses at Washburn University. A two-factor analysis of variance was used to analyze for the potential difference in course grades due to delivery method (online and face-to-face), the potential difference due to instructor (instructors A and B), and the potential interaction between the two independent variables.
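The analysis itself was run in SPSS 16.0. As an illustration only, an equivalent two-factor ANOVA can be sketched in Python with pandas and statsmodels; the file name and column names (grade_points, delivery, instructor) below are hypothetical placeholders, not the study's actual variable names, and the sketch is not claimed to reproduce the exact values reported in Table 6.

```python
# Illustrative sketch of an equivalent two-factor ANOVA outside SPSS.
# File and column names are hypothetical; the study itself used SPSS 16.0.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical export: one row per student, with grade points, delivery
# method ("online" or "face-to-face"), and instructor ("A" or "B").
records = pd.read_excel("ta_grades.xlsx")

# Model grade points by delivery, instructor, and their interaction.
model = ols("grade_points ~ C(delivery) * C(instructor)", data=records).fit()

# ANOVA table with F and p values for delivery, instructor, and interaction.
print(sm.stats.anova_lm(model, typ=2))
```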
Mean and standard deviation for grades were calculated by delivery type and instructor. Table 5 presents the descriptive statistics. The mean grades by delivery method showed little difference between online and face-to-face instruction. Additionally, little difference in mean grade was evident when analyzed by instructor.

Table 5

Means and Standard Deviations by Course Type and Instructor

Course type     Instructor   Mean     Standard deviation   n
Face-to-face    A            3.0690   1.41247              29
                B            2.9586   1.39073              266
                Total        2.9695   1.39084              295
Online          A            2.9024   1.52979              41
                B            3.0271   1.35579              479
                Total        3.0271   1.36911              520
Total           A            2.9714   1.47414              70
                B            3.0027   1.36783              745
                Total        3.0000   1.37635              815

The results of the two-factor ANOVA, presented in Table 6, indicated there was no statistically significant difference in grades due to delivery method (F = 0.078, p = 0.780, df = 1, 811); this test was specific to hypothesis 1. In addition, there was no statistically significant difference in grades due to instructor (F = 0.002, p = 0.967, df = 1, 811) and no significant interaction between the two factors (F = 0.449, p = 0.503, df = 1, 811). The research hypothesis was not supported.

Table 6

Two-Factor Analysis of Variance (ANOVA) of Delivery by Instructor

Source                  df    F       p
Delivery                1     0.148   0.780
Instructor              1     0.003   0.967
Delivery × Instructor   1     0.449   0.503
Error                   811
Total                   815

H2: There is a statistically significant difference in student course retention between students enrolled in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

The sample consisted of 885 students enrolled in TA 300 and TA 310 online and face-to-face courses. The hypothesis testing began with the analysis of the contingency data presented in Table 7. The data are organized with course selection (online or face-to-face) as the row variable and retention in the course as the column variable. Data were included in the retained column if a final grade was reported for the participant. Participants who were coded as withdraw or withdraw failing were labeled as not retained. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (χ2 = 2.524, df = 1, N = 884, p = .112) indicated there was no statistically significant difference between the retention of students enrolled in online courses and that of students enrolled in face-to-face courses in the TA program. Additional results indicated that 93.92% (294/313) of the face-to-face students were retained, compared to 90.89% (519/571) of the online students. The research hypothesis was not supported.

Table 7

Course Retention of Online and Face-to-Face TA Students

                        Retained   Not retained   Total
Face-to-face students   294        19             313
Online students         519        52             571
Total                   813        71             884
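As an illustration, the contingency table above can be analyzed with the same chi-square procedure outside SPSS. The following sketch uses scipy rather than the SPSS software actually employed in the study; with the continuity correction turned off, it yields a statistic consistent with the value reported above.

```python
# Illustrative sketch: chi-square test on the Table 7 contingency counts.
# The study used SPSS 16.0; scipy is shown here only for illustration.
from scipy.stats import chi2_contingency

table_7 = [
    [294, 19],   # face-to-face: retained, not retained
    [519, 52],   # online: retained, not retained
]
chi2, p, dof, expected = chi2_contingency(table_7, correction=False)
print(round(chi2, 3), round(p, 3), dof)  # approximately 2.52, 0.112, and 1
```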
H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

The sample consisted of 249 students enrolled in TA 300 online and face-to-face courses from Fall 2002 through Fall 2008. The hypothesis testing began with the analysis of the contingency data located in Table 8. The table is organized with course selection (online or face-to-face) as the row variable and program retention as the column variable. Data were included in the retained column if students had successfully met the requirements for a Bachelor of Applied Science in Technology Administration or if they were enrolled in the program in Spring 2009. Data were included in the not-retained column if students had not fulfilled degree requirements and were not enrolled in Spring 2009. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (χ2 = 0.132, df = 1, N = 249, p = .717) indicated there was no statistically significant difference between the program retention rate of students who began the TA program in the online courses and that of students who began the program in the face-to-face courses. Additional results showed that 91.57% (163/178) of students who began in online courses were retained, compared to 92.96% (66/71) of students who began the TA program in face-to-face courses. The research hypothesis was not supported.

Table 8

Program Retention of Online and Face-to-Face TA Students

               Retained   Not retained   Total
Face-to-face   66         5              71
Online         163        15             178
Total          229        20             249

Summary

This chapter began with an overview of the analyses and statistical tests in the order in which they were presented, followed by descriptive statistics for the sample, including the age range of participants, grades by gender, and course selection by gender. Results from the testing of H1 revealed no significant difference between the course grades of online students and students enrolled in traditional face-to-face classes. Chi-square testing was utilized for the testing of H2. Results indicated there was no significant difference in the course retention of students enrolled in online courses and students enrolled in traditional face-to-face courses. H3 was also tested utilizing chi-square testing. The results indicated no significant difference in the program retention of students who began the TA program in online courses and students who began in traditional face-to-face courses. Chapter Five provides a summary of the study, discussion of the findings in relationship to the literature, implications for practice, recommendations for further research, and conclusions.

CHAPTER FIVE

INTERPRETATION AND RECOMMENDATIONS

Introduction

In the preceding chapter, the results of the analysis were reported. Chapter Five consists of the summary of the study, an overview of the problem, the purpose statement and research questions, a review of the methodology, major findings, and findings related to the literature. Chapter Five also contains implications for further action and recommendations for further research. The purpose of the latter sections is to expand on the research into distance education, including implications for expansion of course and program delivery and future research. Finally, a summary is offered to capture the scope and substance of what has been offered in the research.

Study Summary

The online delivery of course content in higher education has increased dramatically in the past decade. Allen and Seaman (2007a) reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. They also reported a 9.7% increase in online enrollment compared to the 1.5% growth in overall higher education. As online delivery has grown, so has criticism of its efficacy. Online delivery of education has become an important strategy for the institution that is the setting of this study. The purpose of this study was three-fold.
The first purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroombased counterparts. The second purpose of the study was to determine if there was a 52 significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study was designed to expand the knowledge base concerning online education and its efficacy in providing baccalaureate degree completion opportunities. The research design was a quantitative study to compare course grades, course retention, and program retention of students enrolled in the online and traditional face-toface TA program at Washburn University. Archival data from the student system at Washburn University was utilized to compare online and traditional face-to-face students. In order to answer Research Question 1, a sample of students enrolled in TA 300 and TA 310 online and traditional face-to-face courses was analyzed. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006. Two instructors were responsible for concurrent instruction of both the online and faceto-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for a potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), the instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze both course and program retention (Research Questions 2 and 3). For Research Question 2, archived data from the Washburn University student system was analyzed for students enrolled in TA 300 and TA 310. Additional variables identified for this sample included 53 course selection and instructor (A or B). For Research Question 3, archived data from the Washburn University system was used, which identified students with declared Technology Administration majors who began the TA program enrolled in online and face-to-face courses. A single gatekeeper course (TA 300) was identified for testing. Two instructors (A and B) were responsible for instruction during the testing period. A two-factor ANOVA was utilized to test H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance. ANOVA testing was utilized to account for the two delivery methods and two instructors involved for the period of the study. The results of the test indicated there was no statistically significant difference in grades due to delivery method. The results of the testing also indicated no statistically significant difference in grades due to instructor and no interaction between the two independent variables. The research hypothesis was not supported. To test the next hypothesis, chi-square testing was utilized. 
H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in course retention of students enrolled in online courses and students enrolled in face-to-face courses in the TA program. The research hypothesis was not supported. To test the final hypothesis, chi-square testing was also used. H3: There is a statistically significant difference in program retention between students who begin the 54 Technology Administration program in online courses and students who begin in face-toface courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in the program retention rate of students who began the TA program in the online courses and students who began the program in the face-to-face courses. The research hypothesis was not supported. Testing found that course retention was high in both formats, leading to interpretation that higher results may be due to the age of participants or prior degree completion. The results found no significant difference in grades, course, or program retention for students in online TA courses and students enrolled in traditional face-to-face instruction. The implication of these results compared to current literature is discussed in the next section. Findings Related to the Literature Online education has become a strategy for higher education to provide instruction to students limited by distance or time, or who, for other reasons, do not wish to attend traditional classroom-based university classes. Additionally, online education allows higher education institutions to expand their geographic base. Institutions have utilized distance education for over a century to provide instruction, but it was only within the last two decades that instruction over the Internet had replaced correspondence, television, and video courses as the method of choice for delivery (Russell, 1999). Utilizing grades as a measure of achievement, meta-analyses conducted by Russell (1999), Shachar and Neumann (2003), and Machtmes and Asher (2002) found no significant difference in grades of online students and traditional classroom-based 55 students. These analyses utilized multiple studies of course information, comparing grades of online students and traditional face-to-face students, primarily utilizing t tests as the preferred methodology. The results of previous research were supported by the present study. Additionally, this study went further, analyzing data over more than one semester, controlling for the effect of different instructors. These results were contrary to the conclusion reached by Phipps and Merisotis (1999). The second purpose of the study was to determine if a significant difference existed between the course retention of students enrolled in online TA courses and students enrolled in face-to-face courses. Meta-analyses conducted by Phipps and Merisotis (1999) and Nora and Snyder (2009) concluded a much lower course retention rate in online students as compared to their face-to-face counterparts. The previous metaanalyses examined retention of online students and traditional face-to-face students in distinct courses, utilizing t tests as the primary methodology. 
Those studies used t tests rather than chi-square testing because they were limited to a single course taught by one instructor over one semester or cycle. Carr (2000) reported in The Chronicle of Higher Education that retention of online students was 50% less than that of traditional face-to-face students. Carr's results were based on the examination of longitudinal retention data from universities as reported to the United States Department of Education.

The results of the present study found no significant difference in the course retention rates. These results are supported by the findings of Carmel and Gold (2007), who reported no significant difference in the course retention rates of online students compared to traditional face-to-face students in their analysis of students in multiple courses across disciplines at a 4-year university. The present study expanded those results, examining course data in the same discipline over a 6-year period and controlling for delivery by two separate instructors.

Research into the program completion rates of AAS students has been conducted primarily in traditional university settings, including Townsend's (2002) studies at the University of Missouri-Columbia. Townsend's results showed a lower baccalaureate completion rate for students entering with an AAS than for students who transferred to 4-year universities with an AA degree. Studies by Hogan (1997) of vocational-education programs also found a lower program completion rate for online students compared to students in traditional delivery vocational education programs. Analysis of the data in the current study showed no significant difference in the program completion rate of students who began in online TA courses as compared to students who began the program in face-to-face courses.

Conclusions

The use of distance education for postsecondary instruction, primarily in the form of the Internet, has both changed and challenged the views of traditional university-based instruction. Multiple studies have been designed in an effort to examine whether online students have the same level of academic achievement as their traditional higher education peers. The present study agrees with the research indicating there is no statistically significant difference in the grades of online students and their face-to-face counterparts. In addition, with student retention an issue for all postsecondary institutions, the data from previous studies indicated a lower retention rate for online students than for their traditional face-to-face classmates. The current study contradicted those findings. In the following sections, implications for action, recommendations for research, and concluding remarks are addressed.

Implications for Action

As postsecondary institutions move into the 21st century, many have examined issues of student recruitment and retention in an effort to meet the demands of both their students and their communities. The majority of postsecondary institutions have initiated online education as a strategy to recruit students from beyond their traditional geographic areas. This study supported existing research utilizing grades as a measure of achievement and should alleviate doubt that online students are shortchanged in their education. The transition of existing face-to-face courses to an online delivery model can be accomplished without sacrificing achievement of course and program goals.
The study also examined course and program retention data, finding no significant differences between online and traditional students in the TA program. The findings of this study support the expansion of additional online courses and programs within the School of Applied Studies. Finally, this study can provide the basis for further action, including analyzing other programs and courses offered in the online format by the University. The analysis of other programs offered in an online delivery model would enhance further development of online courses and programs. Recommendations for Future Research Distance education delivery has expanded dramatically with the use of the Internet for online instruction. The present study could be continued in future years to measure the effects of specific curriculum delivery models and changes made to online 58 delivery platforms. In addition, the study could be expanded to include specific characteristics of student retention named in the literature, such as examining whether the age and entering GPA of students provides any insight into course and program retention. The study could also be expanded to include other universities with similar baccalaureate-degree completion programs and other disciplines. Because the body of research is limited concerning the baccalaureate-degree completion of students who begin their postsecondary education in career-oriented instruction, there is value in continuing to study baccalaureate completion rates, both in an online format and in more traditionally based settings. Concluding Remarks The current study examined a Technology Administration program that has been offered in both online and face-to-face format, utilizing data from Fall 2002 through Spring 2008. The TA program was developed to allow students who had completed an occupationally oriented AAS degree to complete a bachelor’s degree program. Three hypotheses were tested in this study, examining course grades, course retention, and program retention of students enrolled in online and face-to-face courses in Technology Administration. No significant difference was found for the three hypotheses. These results form a strong foundation for expanding online courses and programs at Washburn University. By addressing two of the major concerns of educators, achievement and retention, the study results allow expansion of online courses and programs to benefit from data-driven decision-making. Other institutions can and should utilize data to examine existing online course and program data. 59 REFERENCES Allen, I. E., & Seaman, J. (2003). Seizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Needham, MA: The Sloan Consortium. Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Needham, MA: The Sloan Consortium. Allen, I. E., & Seaman, J. (2007a). Making the grade: Online education in the United States. Needham, MA: The Sloan Consortium Allen, I. E., & Seaman, J. (2007b). Online nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium. Arle, J. (2002). Rio Salado College online human anatomy. In C. Twigg, Innovations in online learning: Moving beyond no significant difference (p. 18). Troy, NY: Center for Academic Transformation. Atkins, T. (2008, May 13). Changing times bring recruiting challenges at WU. Retrieved May 15, 2008, from CJOnline Web site at http://cjonline.com/stories/ 051308/loc_278440905.shtml Berge, Z., & Huang, L. P. 
(2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special attention to elearning. American Center for the Study of Distance Education. Retrieved April 17, 2009, from DEOSNEWS Web site at http://www.ed.psu.edu/acsde/deos/deosnews/deosarchives.asp 60 Bradburn, E., Hurst, D., & Peng, S. (2001). Community college transfer rates to 4-year institutions using alternative definitions of transfer. Washington, DC: National Center for Education Statistics. Brown, B. W., & Liedholm, C. (2002, May). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92, 444-448. California Community Colleges Chancellor's Office. (2009, April 20). Retention rates for community colleges. Retrieved April 20, 2009, from https://misweb.cccco.edu/mis/onlinestat/ret_suc_rpt.cfm?timeout=800 Carmel, A. & Gold, S. S.. (2007). The effects of course delivery modality on student satisfaction and retention and GPA in on-site vs. hybrid courses. Retrieved September 15, 2008, from ERIC database. (Doc. No. ED496527) Carnevale, D. (2006, November 17). Company's survey suggests strong growth potential for online education. The Chronicle of Higher Education , p. 35. Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education , pp. 1-5. Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass. Diaz, D. (2002, May-June). Online drop rates revisited. Retrieved April 8, 2008, from The Technology Source Archives Web site at http://www.technologysource.org/article/online_drop_rates-revisited/ Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. The Journal of Higher Education, 63, 188-214. 61 Ebel, R., & Frisbie, D. (1991). Essentials of educational measurement. Prentice Hall: Englewood Cliffs, NJ. The Harvard guide. (2004). Retrieved May 20, 2008, from http://www.news.harvard.edu/guide Hogan, R. (1997, July). Analysis of student success in distance learning courses compared to traditional courses. Paper presented at Sixth Annual Conference on Multimedia in Education and Industry, Chattanoga, TN. Jacobs, J., & Grubb, W. N. (2003). The federal role in vocational education. New York: Community College Research Center. Joliet Junior College history. (2008). Retrieved May 20, 2008, from Joliet Junior College Web site at http://www.jjc.edu/campus_info/history/ Kansas Board of Regents. (2002-2003). Degree and program inventory. Retrieved May 14, 2008, from http://www.kansasregents.org Keeley, E. J., & House, J. D. (1993). Transfer shock revisited: A longitudinal study of transfer academic performance. Paper presented at the 33rd Annual Forum of the Association for Institutional Research, Chicago, IL. Knowles, M. S. (1994). A history of the adult education movement in the United States. Melbourne, FL: Krieger. Laanan, F. (2003). Degree aspirations of two-year students. Community College Journal of Research and Practice, 27, 495-518. Lynch, T. (2002). LSU expands distance learning program through online learning solution. T H E Journal (Technological Horizons in Education), 29(6), 47. 62 Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-41. Gilman, E. W., Lowe, J., McHenry, R., & Pease, R. (Eds.). (1998). Merriam-Webster’s collegiate dictionary. Springfield, MA: Merriam. Nash, R. 
(1984, Winter). Course completion rates among distance learners: Identifying possible methods to improve retention. Retrieved April 19, 2009, from Online Journal of Distance Education Web site at http://www.westga.edu/~distance/ojdla/winter84/nash84.htm National Center for Education Statistics. (2000). Distance education statistics 1999-2000. Retrieved March 13, 2008, from at http://nces.ed.gov/das/library/tables_listing National Center for Education Statistics. (2001). Percentage of undergraduates who took any distance education courses in 1999-2000
INTRODUCTION

Historically, postsecondary education in the United States was founded on the principles of the European system, requiring the physical presence of professors and students in the same location (Knowles, 1994). From 1636, with the founding of Harvard University (The Harvard Guide, 2004), to the development of junior colleges and vocational schools in the early 1900s (Cohen & Brawer, 1996; Jacobs & Grubb, 2003), the higher education system developed to prepare post-high school students for one of three separate tiers. The college and university system in the United States developed its own set of structures designed to prepare students for baccalaureate and graduate degrees. Junior colleges were limited to associate degrees, while vocational education institutions offered occupational certificates. In many cases, there was inadequate recognition of the postsecondary education offered at junior colleges and vocational education institutions, resulting in the inability of students to transfer to 4-year institutions (National Center for Education Statistics, 2006).

In the mid-20th century, some junior colleges began to provide academic, vocational, and personal development educational offerings for members of the local communities. During this same period, junior or community colleges developed a role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs involved Associate of Arts (AA) and Associate of Science (AS) degrees. Associate of Applied Science (AAS) degrees were developed during the 1990s. The AAS degree was granted to those who successfully completed the majority of their college program in vocational education. The creation of a variety of applied baccalaureate degrees allowed students who had previously thought of the AAS degree as a terminal program to complete a baccalaureate degree (Kansas Board of Regents, 2002-2003).

Online education also became a strategy for students to access higher education in the 1990s (Allen & Seaman, 2007b). The proliferation of online courses alleviated some of the location-bound barriers to higher education, but online education was criticized by traditional academicians as less rigorous than classroom-based course work. Russell attempted to address this argument with his 1999 meta-analysis of studies dating from the 1920s and covering multiple delivery models, including online education. Russell concluded there was no statistically significant difference in student achievement between courses offered online and those offered in the traditional classroom setting.

Since the development of correspondence courses in the 1920s, researchers have attempted to ascertain if students participating in distance education are being shortchanged in their educational goals. No significant difference in grades has been found in the majority of studies designed to address this issue. Studies analyzing online student retention, however, have shown significantly lower retention for online students. In the last 10 years, research studies have expanded to include variations of online education. These include strictly online courses, hybrid courses, Web-assisted classroom settings, and the traditional higher education course offered only as face-to-face instruction (Carmel & Gold, 2007).
Online education continues to proliferate at the same time that the number of secondary students in the United States overall is projected to increase (National Center for Education Statistics [NCES], 2006). The projected increase of potential postsecondary students and online postsecondary options provides opportunities for increases in online education programs and courses. In 2000, NCES reported that over 65% of students in higher education were participating in online courses. In a 2007 study, Allen and Seaman estimated that only 16% of those enrolled in online education courses are undergraduate students seeking their first degree, counter to the projected increase in traditional-age students. The majority of enrollees in online education are adults updating or advancing their credentials, creating an additional educational market for colleges and universities seeking to expand enrollment without adding physical space (Allen & Seaman, 2007a). For states and localities facing a decrease in traditional-age enrollment, contrary to the national projection, these figures present an untapped market for higher education courses and programs.

Background

Researchers attempted to analyze the efficacy of distance education as far back as the 1920s, when correspondence courses were created to meet the needs of students not willing to attend a traditional classroom-based higher education setting. A meta-analysis of these studies resulted in "The No Significant Difference Phenomenon," reported by Russell (2001). The results of over 355 studies were compiled, comparing various modes of delivery including correspondence, audio, and television courses and the newest wave of computer-facilitated instruction. Following analyses of studies completed prior to 2001, Russell concluded there was no difference in learning between students enrolled in distance education and those completing courses in the traditional setting.

Studies completed since then have provided mixed results. Summers, Waigand, and Whittaker (2005) found there was no difference in GPA and retention between the online and traditional classroom. Arle (2002) found higher achievement by online students, and Brown and Liedholm (2002) found GPA and student retention better in a traditional classroom setting.

Student retention is an integral part of the student achievement conversation and is an issue for all forms of higher education. Degree-seeking students' overall retention has been reported as less than 56% by NCES (2001). Retention has long been considered a problem in higher education, and attention to the distance education model has shown even lower retention rates for online students than for students attending classes in the traditional college setting (Phipps & Merisotis, 1999). Research on different modalities, such as fully online and hybrid online courses, has produced mixed results (Carmel & Gold, 2007). No significant trend toward increased retention of students in any of the online modalities has been documented.

Retention studies of transfer students have primarily included traditionally defined students transferring from a community college. Statistics have consistently shown a lower retention rate for students transferring from a community college to a 4-year university than for students who began their post-high school education at a 4-year institution (NCES, 2006).
Townsend's studies of transfer students at the University of Missouri-Columbia also showed a lower baccalaureate retention rate for students who had completed an AAS degree than for students beginning their education at a 4-year institution (Townsend, 2002).

Occupationally oriented bachelor's degree completion programs are relatively new to higher education. Transfer programs in the liberal arts from community colleges to 4-year institutions were common by the 1990s. Townsend (2001), in her study conducted at the University of Missouri-Columbia, observed the blurring of the lines between non-transferable occupationally oriented undergraduate degrees and undergraduate degrees and certificates that were easily transferred. The study conducted by Townsend was among the first to recognize that many students who began their education at community and technical colleges had bachelor's degree aspirations that grew after their completion of an occupationally oriented degree. Laanan (2003) proposed that the increase in institutions offering AAS degrees necessitated new ways to transfer undergraduate credits.

The setting of this study is a medium-sized Midwestern campus located in Topeka, Kansas. Washburn University enrolls approximately 6000 students a year in undergraduate and graduate programs, including liberal arts, professional schools, and a law school (Washburn University, 2008). The Technology Administration (TA) program selected for the present study began in the 1990s as a baccalaureate degree completion program for students who had received an occupationally oriented associate degree at a Kansas community college or through Washburn's articulation agreement with Kansas vocational-technical schools. This program provided students who previously had obtained an Associate of Applied Science degree in an occupational area an opportunity to earn a bachelor's degree.

Peterson, Dean of Continuing Education, Washburn University, stated that in early 1999, Washburn University began online courses and programs at the behest of a neighboring community college (personal communication, April 18, 2008). Washburn was asked to develop an online bachelor's degree completion program for students graduating from community colleges and technical colleges with an Associate of Applied Science degree. The TA program was among the first programs to offer the online bachelor's degree completion option. The TA program offered its first online courses in Spring 2000. Online education at Washburn expanded to other programs and courses, to include over 200 courses (Washburn University, 2008). The original online partnership with two community colleges expanded to include 16 additional community colleges and four technical colleges in Kansas, as well as colleges in Missouri, California, Wisconsin, South Carolina, and Nebraska (Washburn University, 2008). An initial study in 2002 of students' course grades and retention in online courses offered at Washburn showed no significant difference between students enrolled in online courses and students enrolled in traditional face-to-face course work (Peterson, personal communication, April 18, 2008). No studies of program retention have been completed.

In 2008, Atkins reported that overall enrollment at Washburn University decreased 6.7% from Fall 2004 to Fall 2008, from 7400 to 6901 students. During the same period, online course enrollments increased 65%, from 3550 students to 5874 in 2007-2008 (Washburn University, 2008).
Atkins also reported that between 1998 and 2008, the ratio of traditional post-high school age students to nontraditional students enrolling at Washburn University reversed from 40:60 to 60:40. The shift in enrollment patterns produced an increase in enrollment in the early part of the 21st century; however, Washburn University anticipated a decrease in high school graduates in Kansas through 2016, based on demographic patterns of the state. The state figures are opposite the anticipated increase of traditional-age students nationally (NCES, 2008). The increase in distance education students in relation to the anticipated decline in traditional-age students provided the focus for the study.

Purpose of the Study

Online education has become an important strategy for the higher education institution that was the setting of this study. First, the purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. The second part of the study was a replication of studies comparing modes of online course delivery to traditional classroom-based instruction (Carmel & Gold, 2007; Russell, 1999). A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study's purpose was to expand the knowledge base concerning online education to include its efficacy in providing baccalaureate degree completion opportunities.

Research Questions

Roberts (2004) stated that research questions guide the study and usually provide the structure for presenting the results of the research. The research questions guiding this study were:

1. Is there a statistically significant difference between students' grades in online classes and traditional face-to-face classes?

2. Is there a statistically significant difference between course retention rates in online classes and traditional face-to-face classes?

3. Is there a statistically significant difference in program retention between students entering the program enrolled in online classes and students entering the program enrolled in traditional face-to-face classes?

Overview of the Methodology

A quantitative study was utilized to compare grades by course, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized from comparative online and traditional face-to-face classes in two separate courses. In order to answer Research Question 1, a sample of 885 students enrolled in online and traditional face-to-face courses was identified. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006 in both the online and traditional face-to-face classes. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed.
A two-factor analysis of variance was used to analyze for the potential difference in the dependent variables, course grades due to delivery method (online and face-to-face), instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze course and program retention (Research Questions 2 and 3). 9 Delimitations Roberts (2004) defined delimitations as the boundaries of the study that are controlled principally by the researcher. The delimitations for this study were 1. Only data from 2002 through 2008 from Technology Administration online and face-to-face courses were utilized. 2. The study was confined to students enrolled at Washburn University in the Technology Administration program. 3. Only grades and retention were analyzed. Assumptions Assumptions are defined as those things presupposed in a study (Roberts, 2004). The study was based on the following assumptions: 1. Delivery of content was consistent between online and face-to-face courses and instructors, 2. Course objectives were the same for paired online and traditional face-toface courses, 3. All students enrolled in the TA program met the same criteria for admission to the University, 4. All data entered in the Excel spreadsheets were correct, 5. All students enrolled in the TA program met the same criteria for grade point average and program prerequisites. 10 Definitions The following terms are defined for the purpose of this study: Distance education. Education or training courses delivered to remote locations via postal delivery, or broadcast by audio, video, or computer technologies (Allen, 2007). Dropout. A dropout is defined as a student who has left school and discontinued studies (Merriam-Webster's Collegiate Dictionary, 1998). Face-to-face delivery. This is a course that uses no online technology; content is delivered in person, either in written or oral form (Allen, 2007). Hybrid course. This course is a blend of the online and face-to-face course. A substantial proportion of the content is delivered online, typically using some online discussions and some face-to-face meetings (Allen, 2007). Online course. This defines a course where most or all of the content is delivered online via computer technologies. Typically, there are no face-to-face meetings (Allen, 2007). 2+2 PLAN. The Partnership for Learning and Networking is a collaborative set of online 2+2 baccalaureate degree programs developed by Washburn University. The programs require completion of an associate degree from one of the partner community or technical colleges (Washburn University, 2008). Retention. This term refers to the completion of a course by receiving a letter grade in a course, or a certificate of completion or degree for program completion (Washburn University, 2008). Web-assisted. A course that uses Web-based technology to facilitate what is essentially a face-to-face course (Allen, 2007). 11 Organization of the Study This study consists of five chapters. Chapter One introduced the role of distance education in higher education. Chapter One included the background of the study, the research questions, overview of the methodology, the delimitations of the study, and the definition of terms. Chapter Two presents a literature review, which includes the history of occupational postsecondary education, distance education, and studies relating to grades and retention of students involved in distance education. 
Chapter Three describes the methodology used for the research study. It includes the selection of participants, design, data collection, and statistical procedures of the study. Chapter Four presents the findings of the research study. Finally, Chapter Five provides a discussion of the results, conclusions, and implications for further research and practice. 12 CHAPTER TWO LITERATURE REVIEW This chapter presents the background for research into the efficacy of distance education in the delivery of higher education. Research studies have focused primarily on grades as a measure of the quality of distance education courses as compared to traditional face-to-face instruction. Utilizing grades has produced a dividing line among education researchers concerning the use of distance education as a delivery model. Retention in distance education has focused primarily on single courses, with little program retention data available. Data from retention studies in higher education have focused primarily on the traditional 4-year university student. Retention studies of community college students have produced quantitative results; however, these studies have been directed at community college students who identify themselves as transfer students early in their community college careers. Retention studies of students enrolled in occupationally oriented programs are limited. Statistical data of higher education shows an increased use of distance education for traditional academic courses as well as occupationally oriented courses. The increase in distance education courses and programs has provided a new dimension to studies of both grades and retention. The recognition of this increase, as well as questions concerning its impact on student learning and retention, produced the impetus for this study. The following review of the literature represents the literature related to this research study. Through examination of previous research, the direction of the present study was formulated. Specifically, the chapter is organized into four sections: (a) the 13 history of occupational transfer programs; (b) the history and research of distance education, including occupational transfer programs utilizing distance education; (c) research utilizing grades as an indicator of student learning in online education; and (d) research focusing on student retention in higher education, including student retention issues in transfer education and online transfer courses and programs. History of Occupational Transfer Programs The measure of success in higher education has been characterized as the attainment of a bachelor’s degree at a 4-year university. Occupationally oriented education was considered primarily a function of job preparation, and until the 1990s was not considered transferrable to other higher education institutions. Occupational transfer programs are a recent occurrence within the postsecondary system that provides an additional pathway to bachelor’s degree completion. Historically, the postsecondary experience in the United States developed as a three-track system. Colleges were established in the United States in 1636 with the founding of Harvard College (The Harvard Guide, 2004). Junior colleges were first founded in 1901 as experimental post-high school graduate programs (Joliet Junior College History, 2008). Their role was initially as a transfer institution to the university. 
When the Smith-Hughes Act was passed in 1917, a system of vocational education was born in the United States (Jacobs & Grubb, 2003), and was designed to provide further education to those students not viewed as capable of success in a university setting. Vocational education, currently referred to as occupational or technical education, was not originally designed to be a path to higher education. The first programs were designed to help agricultural workers complete their education and increase their skills. 14 More vocational programs were developed during the early 20th century as industrialization developed and as increasing numbers of skills were needed by workers in blue-collar occupations (Jacobs & Grubb, 2003). In the mid-20th century, some junior colleges expanded their programs beyond academic selections to provide occupational development and continuing education. Because of the geographic area from which they attracted students, junior colleges developed a role as “community” colleges. They also solidified their role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs to 4-year universities involved traditional academic degrees, including the Associate of Arts (AA) and Associate of Science (AS) degrees. Occupational programs and continuing education were viewed as terminal and non-transferrable. In 1984, Congress authorized the Carl Perkins Vocational and Technical Education Act (P.L. 98-524). In the legislation, Congress responded to employers’ concerns about the lack of basic skills in employees by adding academic requirements to vocational education legislation. Vocational program curriculum was expanded to include language arts, mathematics, and science principles, and the curriculum reflected the context of the program. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was created in 1990 to determine the skills young people need to succeed in the world of work (U.S. Department of Labor, 2000). In the second Carl Perkins reauthorization in 1990 (P.L. 105-332), Congress responded to the report, which targeted academic and job skills, by outlining a seamless system of vocational and academic 15 education to prepare vocational students to progress into and through higher education. This emphasis led to the development of Associate of Applied Science (AAS) degrees during the 1990s. Granted to those who have successfully completed programs in the applied arts and sciences for careers, AAS degrees were seen as terminal (Kansas Board of Regents, 2002-2003). But as one goal was attained, conversation turned to creating a pathway from occupational associate degrees to bachelor’s degree completion. The desire of students to continue from technical degrees to a baccalaureate was not a new idea. In a paper presented in 1989 to the American Technical Association national conference, TrouttErvin and Morgan’s overview of 2+2 programs showed acceptance of AAS degrees at traditional universities was generally non-existent. Their suggestion for an academic bridge from early technical education to baccalaureate programs highlighted programs accepting AAS degrees toward baccalaureate completion were an exception rather than a rule (Troutt-Ervin & Morgan, 1989). 
It was not until the late 1990s that applied baccalaureate degrees recognized credits from technical degree students who had previously thought of themselves in a terminal program to complete their baccalaureate degree (Wellman, 2002). Despite the advance of recognition of AAS degrees, standard definitions of transfer students continued to exclude students who completed technical programs. The U.S. Department of Education did not include students receiving an Associate of Applied Science degree in the definition of students preparing for transfer to 4-year colleges (Bradburn, Hurst, & Peng, 2001; Carnevale, 2006). Most states had comparable policies in place concerning core academic curriculum, articulation agreements, transfer of credit, 16 and statewide transfer guides. There was no general recognition of occupational credit transfer. Only a few states, including Kansas, Missouri, and Washington, allowed credits earned in occupationally oriented degrees to transfer to 4-year institutions (Townsend, 2001). No state had set clear goals for the transference of occupational credits between institutions or for the state as a whole (Wellman, 2002). Despite the lack of recognition of occupational transfer credit at the federal level, a new definition of transfer education had emerged. Initially defined as the general education component of the first 2 years of a baccalaureate, the definition of transfer education now included any courses that transferred to a 4-year college, regardless of the nature of the courses (Townsend, 2001). The line between vocational schools, community colleges, and 4-year institutions blurred in the United States as employers and students increasingly made business decisions regarding education and workforce development. Employers increasingly asked for employees with academic and technical skills, as well as critical thinking skills and personal responsibility (U.S. Department of Labor, 2000). Returning students themselves were more attuned to the demands of the 21st century workforce. Their desire to return to higher education, coupled with the economy and the variety of options available to them, required a more adaptive higher education system (Carnevale, 2006). There was growing demand among new and returning students for higher education opportunities responsive to their needs. The expanding needs of the returning student provided opportunities for higher education to respond by utilizing different delivery models. 17 Distance Education Online education became a strategy for postsecondary institutions when the first correspondence courses were initiated with the mail service in the early 20th century (Russell, 1999). As various technologies emerged, distance education utilized television and video models, in addition to paper-based correspondence courses. The expansion of distance education utilizing computer technologies renewed academic debate over the efficacy of the delivery model. Online education utilizing the Internet became a significant factor in the 1990s, prompting renewed evaluation of the use of distance learning opportunities (Russell, 1999, Phipps & Meristosis, 1999). In 1999–2000, the number of students who took any distance education courses was 8.4% of total undergraduates enrolled in postsecondary education (NCES, 2000). In 2000, the report of the Web-Based Education Commission to the President and Congress concluded that the Internet was no longer in question as a tool to transform the way teaching and learning was offered. 
The Commission recommended that the nation embrace E-learning as a strategy to provide on-demand, high-quality teaching and professional development to keep the United States competitive in the global workforce. They also recommended continued funding of research into teaching and learning utilizing web-based resources (Web-Based Education Commission, 2000). The acceptance of the importance of the Internet for delivery of higher education opened new opportunities for research and continued the academic debate of the quality of instruction delivered in online education courses and programs. In a longitudinal study from 2002-2007, The Sloan Consortium, a group of higher education institutions actively involved in online education, began studies of online 18 education in the United States over a period of 5 years. In the first study, researchers Allen and Seaman (2003) conducted polls of postsecondary institutions involved with online education and found that students overwhelming responded to the availability of online education, with over 1.6 million students taking at least one online course during the Fall semester of 2002. Over one third of these students took all of their courses online. The survey also found that in 2002, 81% of all institutions of higher education offered at least one fully online or blended course (Allen & Seaman, 2003). In their intermediate report in 2005, Allen and Seaman postulated that online education had continued to make inroads in postsecondary education, with 65% of schools offering graduate courses and programs face-to-face also offering graduate courses online. Sixty-three percent of undergraduate institutions offering face-to-face courses also offered courses online. From 2003 to 2005, the survey results showed that online education, as a long-term strategy for institutions, had increased from 49% to 56%. In addition, core education online course offerings had increased (Allen & Seaman, 2005). In Allen and Seaman’s final report (2007b) for the Sloan Consortium, the researchers reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. Allen and Seaman also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. They found by 2007, 2-year institutions had the highest growth rates and accounted for over the half the online enrollments in the previous 5 years. The researchers concluded, based on a survey 19 conducted as part of the research, institutions believed that improved student access was the top reason for offering online courses and programs (Allen & Seaman, 2007b). Community colleges began embracing distance education in the 1920s as part of their mission to provide low-cost, time-effective education. Community colleges initially provided correspondence courses by mail, but later switched to television and video courses as technology improved (Cohen & Brawer, 1996). In 2001, over 90% of public 2- year colleges in the United States provided distance education courses over the Internet (NCES, 2001). Vocational education, by the nature of its instructional format, was among the last of the educational institutions to participate in distance education. Because of the kinesthetic nature of instruction, vocational education leaders began investigating distance education opportunities in the 1990s, relying on the method to provide only the lecture portion of instruction. 
By 2004, only 31% of students enrolled in vocational schools had participated in some form of distance education during their program of study (NCES, 2005). In 2008, hands-on instruction in programs such as automobile mechanics and welding, and the clinical portion of health occupations programs, continued to be taught in the traditional classroom setting (NCES, 2008). Analysis of data reported by the NCES indicated that distance education had become a staple for higher education institutions. At both the 4-year and 2-year university level, over 65% of institutions offered more than 12 million courses in 2006-2007 by distance education. While vocational education had traditionally been more hands-on, distance education had become more prevalent in providing opportunities for students to participate in components of the system over the Internet (NCES, 2008). 20 Distance education became the prevalent strategy for higher education institutions to expand their services to new and returning students, without the financial implications of capital expansion. Higher education utilized the strategy to market to students outside their traditional geographic reach by utilizing the power of the Internet. The increasing demand from students of all ages for online opportunities provided new ground for the expansion of higher education opportunities. Grades as an Indicator of Quality of Student Learning The grading system in the United States educational system has served as an indicator of knowledge for over 100 years. Educators have utilized high school grades as a sorting mechanism in American schools to determine postsecondary opportunities. Modern society has accepted honors attainment, graduation honors, and course grades as an indicator of knowledge acquisition in postsecondary education. Stray (2001) reported that the use of grading in schools can be traced to the industrial revolution and the development of factories. William Farish of Cambridge University developed the first grading system in higher education in 1792 (Stray, 2001). Farish mimicked the system established by factories of the time: grade A being the best. The thought was that Farish employed the grading system in order to teach more students, an aberration at that time when instructors rarely had more than a few. The demand for more higher education opportunities prompted Farish to open his class to more students, and as such, led to his use of a sorting system. This was the first known record of grading utilized in classrooms to measure student achievement (Stray, 2001). 21 Smallwood (1935) reported the first grading in higher education at Yale University in 1792. Stiles, President of Yale University, directed the use of the scale in the late 18th century. However, Smallwood noted it was not until 1813 that any record of grades or marking appeared. Using a scale of 100, philosophy and mathematic professors instituted the first use of a marking instrument in the 1800s at Harvard. Smallwood noted early systems were experimental, utilizing different numerical scales, with no standardized system in place between higher education institutions. It was not until the late 1800s that faculty began using descriptors, such as A and B, to rank students according to a predetermined numerical scale (Smallwood, 1935). Experimentation with evaluation of achievement continued into the early 20th century, when educational psychologists, including Dewey and Thorndike, attempted to compare grading scales with intelligence testing. 
Thorndike’s philosophy of standardized testing and grading survived the 20th century, and his quote, “Whatever exists at all exists in some amount” (Thorndike, 1916, as cited in Ebel & Frisbie, p. 26) has been utilized in educational measurement textbooks as a validation of the use of standards of measurement to measure achievement (Ebel & Frisbie, 1991). The use of grades expanded to community colleges, high schools, and elementary schools in the early 1900s (Pressey, 1920). The use of grades throughout the educational system is fairly standardized today with the 4.0 scale. It is this standardization that allows comparison of grades as achievement between educational levels and institutions (Ebel & Frisbie, 1991) and allows grades to be utilized as a measure for comparison of educational achievement. 22 Researchers analyzing the success of community college transfer students have traditionally studied the grades of the traditional transfer student with an AA or AS degree. Keeley and House’s 1993 study of sophomore and junior transfer students at Northern Illinois University analyzed “transfer shock” (p. 2) for students matriculating from community colleges. The researchers found students who transferred from a community college obtained a grade point average significantly lower in their first semester than did students who began their college career at a 4-year institution. However, the results of the longitudinal studies showed that transfer students who persisted to graduation showed an equivalent GPA at baccalaureate completion (Keeley & House, 1993). Students who transferred from occupationally oriented degree programs typically were not included in traditional studies of transfer students. While the research in general does not include AAS students in traditional transfer data, limited conclusions were available comparing AAS students to traditional 4-year college attendees. Townsend’s study at the University of Missouri-Columbia (2002) showed no difference in grades at baccalaureate graduation between students with an AA/AS degree and students with an AAS degree. The use of grades as an indicator of the level of student achievement has been relied upon by studies comparing traditional classroom instruction and distance instruction. Research analyzing the effectiveness of student learning in distance education began with the first correspondence courses offered utilizing the mail service (Russell, 1999). The study of effectiveness of correspondence courses expanded to include new technologies, such as television and video courses, and increased with the proliferation of 23 online educational offerings. Researchers continued to challenge the effectiveness of learning methods not delivered in traditional higher education settings. In 1991, Russell reviewed over 355 studies, dating from the 1930s and continuing through the late 1980s, and found no significant difference in student learning using any form of distance education, as compared with students in classroom-based instruction (Russell, 1999). Russell’s conclusion formed the basis for a series of works collectively known as “No Significant Difference.” Russell’s conclusion from his studies follows: The fact is the findings of comparative studies are absolutely conclusive; one can bank on them. 
No matter how it is produced, how it is delivered, whether or not it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts even though students would rather be on campus with the instructor if that were a real choice. (p. xviii) Overwhelmingly, studies have supported Russell’s conclusions, including Neuhauser’s (2002) study of traditional face-to-face education and online education in a business communications class at a large urban university in North Carolina. Neuhauser concluded there was no significant difference in pre- and post-test scores of students enrolled in online and traditional communications classes. In addition, Neuhauser found no significant difference in final grades, homework grades, and grades on research papers, even though learners in the online course were significantly older than were learners in the traditional face-to-face section. The Summers et al. (2005) research included a comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. 24 The study, conducted at the University of Missouri-Columbia, included undergraduate nursing students who were tested on both their pre- and post-course knowledge of statistics. Their results indicated that utilizing grades as an indicator of knowledge showed no significant difference between the online and traditional classroom students. In their meta-analysis, Machtmes and Asher (2002) reviewed 30 studies and concluded there did not appear to be a difference in achievement, as measured by grades, between distance and traditional learners. As technology use continued to evolve in online education, various studies were conducted to determine whether different delivery methods created a difference in the grades of online students compared to their face-to-face counterparts. A study conducted by Carmel and Gold (2007) supported Russell’s original conclusion by analyzing specific types of online platforms and delivery models. Carmel and Gold’s study included hybrid and traditional classroom-based instruction. They analyzed results from 164 students in 110 courses and found no significant difference in student achievement based on grades between students enrolled in either delivery method. Additional studies supporting Russell’s theory have crossed multiple content areas and delivery models. Brown and Liedholm’s (2002) study at Michigan State University included microeconomics students in virtual, hybrid, and traditional classroom-based instruction. The study included 389 students in the traditional setting, 258 in the hybrid delivery section and 89 students enrolled in online education. No significant difference in student learning as measured by end of course grades was found. Research also showed type of course discipline is not affected by the online delivery model. Schulman and Simms (1999) compared pretest and posttest scores of 25 students enrolled in an online course and a traditional course at Nova Southeastern University. The researchers compared 40 undergraduate students enrolled in online courses and 59 undergraduate students enrolled in the classroom setting of the same course. Results indicated that the students who select online courses scored higher than traditional students scored on the pretest results. However, posttest results showed no significant difference for the online students versus the in-class students. 
Schulman and Simms concluded that online students were learning equally as well as their classroombased counterparts. Reigle’s (2007) analysis across disciplines at the University of San Francisco and the University of California found no significant difference between online and face-to-face student grade attainment. Shachar and Neumann (2003) conducted a meta-analysis that estimated and compared the differences between the academic performance of students enrolled in distance education compared to those enrolled in traditional settings over the period from 1990-2002. Eighty-six studies containing data from over 15,000 participating students were included in their analysis. The results of the meta-analysis showed that in two-thirds of the cases, students taking courses by distance education outperformed their student counterparts enrolled in traditionally instructed courses. Lynch, during the use of the “Tegrity” system, a brand-specific online platform at Louisiana State University, found that students’ grades were slightly better after utilizing the technology than when the traditional approach was used (Lynch, 2002). Initial results of a University of Wisconsin-Milwaukee study of 5000 students over 2 years indicated that the U-Pace online students performed 12% better than their traditional Psychology 101 counterparts on the same cumulative test (Perez, 2009). Arle’s (2002) study found 26 students enrolled in online human anatomy courses at Rio Salado College scored an average of 6.3% higher on assessments than the national achievement average. Students were assessed using a national standardized test generated by the Human Anatomy and Physiology Society, whose norming sample is based entirely on traditional classroom delivery (Arle, 2002). In a study conducted by Stephenson, Brown, and Griffin (2008), comparing three different delivery styles (traditional, asynchronous electronic courseware, and synchronous e-lectures), results indicated no increased effectiveness of any delivery style when all question types were taken into account. However, when results were analyzed, students receiving traditional lectures showed the lowest levels on questions designed to assess comprehension. Research found supporters in higher education academic leaders. In a 2006 survey of Midwestern postsecondary institutions concerning their online offerings, 56 % of academic leaders in the 11 states rated the learning outcomes in online education as the same or superior to those in face-to-face instructional settings. The proportion of higher education institutions believing that online learning outcomes were superior to those for face-to-face outcomes was still relatively small, but had grown by 34% since 2003, from 10.2 to 13.7 % (Allen & Seaman, 2007b). This belief added merit to the conclusions supported by Russell and others. Russell’s (1999) “no significant difference” conclusion had its detractors. The most commonly cited is Phipps and Merisotis (1999), who reviewed Russell’s original meta-analysis (1999) and reported a much different conclusion. They concluded that the overall quality of the original research was questionable, that much of the research did 27 not control for extraneous variables, and therefore it could not show cause and effect. They included in their findings evidence that the studies utilized by Russell (2000) in the meta-analysis did not use randomly selected subjects, did not take into effect the differences among students, and did not include tests of validity and reliability. 
The Phipps and Merisotis (1999) analysis included the conclusion that research has focused too much on individual courses rather than on academic programs, and has not taken into account differences among students. They postulated that based on these conclusions, there is a significant difference in the learning results, as evidenced by grades, of students participating in distance education as compared to their classroombased peers. Their analysis of Russell’s original work questioned both the quality and effectiveness of research comparing distance and traditional education delivery. While there has been ongoing conjecture that online education students are not receiving an equivalent learning experience compared to their traditional classroom counterparts, studies utilizing grades as an indicator of student learning have produced little evidence of the disparity. The incidence of studies showing significant negative differences in grades of online learners is small. Higher education institutions have indicated their support for online education, and its continued growth has allowed studies such as the present research to contribute to ongoing dialogue. Student Retention in Postsecondary Education Persistence and retention in higher education is an issue that has intrigued researchers for over 50 years. Quantitative studies conducted in the mid-20th century produced data that caused researchers to look at low retention rates in higher education 28 and search for answers. This question has continued to consume researchers and higher education institutions. In 1987, Tinto attempted to summarize studies of individual student retention in higher education by proposing a theory to allow higher education administrators to predict success and support students (Tinto, 1987). Tinto’s model of student engagement has been in use for over 20 years as higher education administrators and faculty attempt to explain student retention issues at universities and colleges. Tinto’s model primarily focused on factors of student engagement: How students respond to instructors, the higher education community itself, and students’ own engagement in learning are the primary factors Tinto theorized as determining the student’s retention. In the concluding remarks to his 1987 treatise on retention, Tinto acknowledged that persistence in higher education is but one facet of human growth and development, and one that cannot necessarily be attributed to a single factor or strategy. Tinto’s (1987) original study of student retention included the observation that student retention is a complicated web of events that shape student leaving and persistence. He observed that the view of student retention had changed since the 1950s, when students were thought to leave due to lack of motivation, persistence, and skills, hence the name dropout. In the 1970s, research began to focus on the role of the environment in student decisions to stay or leave. In the 1990s, Tinto proposed that the actions of the faculty were the key to institutional efforts to enhance student retention (Tinto, 2007). This was a significant addition to his theory, placing the cause on the instructor instead of the student, and it has done much to influence retention strategies 29 utilized in higher education institutions (Tinto, 2007). Tinto’s studies have driven research in both traditional retention studies and those involving distance education. 
Studies of the persistence of the postsecondary student routinely focus on 4-year postsecondary education. It is only within the last 20 years that persistence studies have included community college students and occupational students, acknowledging that their reasons for entering the postsecondary community are different from the traditional 4- year higher education participant (Cohen & Brawer, 1996). With different avenues to a baccalaureate degree more prevalent, the research into college persistence has expanded to include other types of programs and students. Postsecondary student retention rates routinely utilize data from longitudinal studies of students entering in a Fall semester and completing a bachelor’s program no more than 6 years later (NCES, 2003). The National Center for Education Statistics reported that 55% of those seeking a baccalaureate degree would complete in 6 years (NCES, 2003). The report acknowledged institutions are unable to follow students who transfer to other institutions; they are able to report only the absence of enrollment in their own institution. Research has also found a large gap between community college entrants and 4- year college entrants in rates of attaining a bachelor’s degree. Dougherty (1992) reported that students entering community college receive 11 to 19% fewer bachelor’s degrees than students beginning at a 4-year university. Dougherty postulated that the lower baccalaureate attainment rate of community college entrants was attributable to both their individual traits and the institution they entered (Dougherty, 1992). 30 Studies of student retention of community college also vary based on the types of students. Community college retention rates are routinely reported as lower than traditional 4-year institutions (NCES, 2007). Cohen and Brawer (1996) attributed the differences in retention to the difference in the mission. In many instances, students did not enroll in a community college in order to attain a degree (Cohen & Brawer, 1996). The most recent longitudinal study in 1993 showed a retention rate of 55.4% of students after 3 years (NCES, 2001). Of community college students, only 60.9% indicated a desire to transfer later to a baccalaureate degree completion program (NCES, 2003). While retention data collected by the federal government (NCES, 2003) did not include students with an AAS degree, Townsend’s studies of the transfer rates and baccalaureate attainment rates of students in Missouri who had completed an Associate of Arts and students who had completed an Associate of Applied Science degree was 61% compared to 54% (Townsend, 2001). Vocational or occupational programs have reported retention rates as “program completion,” a definition involving completion of specific tasks and competencies instead of grades and tied to a limited program length. This state and federal requirement indicates program quality and ensures continued federal funding. In 2001, the U.S. Department of Education reported a 60.1% completion rate of postsecondary students enrolled in occupational education (NCES, 2007). Until 1995, the reasons for students leaving was neither delineated nor reported; it was not until federal reporting requirements under the Carl Perkins Act of 1994 that institutions were required to explore why students were not retained in vocational programs (P.L. 105-332). 31 Distance education provided a new arena for the study of student persistence. 
Theorists and researchers have attempted to utilize Tinto’s model of student persistence to explain retention issues involved with distance education. However, Rovai (2003) analyzed the differing student characteristics of distance learners as compared to the traditional students targeted by Tinto’s original models and concluded that student retention theories proposed from that population were no longer applicable to distance education learners. Rovai proposed that distance educators could address retention in ways that traditional higher education has not. He suggested that distance educators utilize strategies such as capitalizing on students’ expectations of technology, addressing economic benefits and specific educational needs to increase student retention in courses (Rovai, 2003). The expanded use of technology created a distinct subset of research into student retention issues. In 2004, Berge and Huang developed an overview of models of student retention, with special emphasis on models developed to explain the retention rates in distance education. Their studies primarily focused on the variables in student demographics and external factors, such as age and gender, which influence persistence and retention in online learning. Berge and Huang found that traditional models of student retention such as Tinto’s did not acknowledge the differences in student expectations and goals that are ingrained in the student’s selection of the online learning option. Other researchers have attempted to study retention issues specifically for online education. In a meta-analysis, Nora and Snyder (2009) found the majority of studies of online education focused on students’ individual characteristics and individual 32 perceptions of technology. Nora and Snyder concluded that researchers attempt to utilize traditional models of student engagement to explain student retention issues in distance or online learning courses, with little or no success. This supported Berge and Huard’s conclusions. Nora and Snyder (2009) also noted a dearth of quantitative research. Few quantitative studies exist that support higher or equal retention in online students compared to their classroom-based counterparts. One example is the Carmel and Gold (2007) study. They found no significant difference in student retention rates between students in distance education courses and their traditional classroom-based counterparts. The study utilized data from 164 students, 95 enrolled in classroom-based courses and 69 enrolled in a hybrid online format. Participants randomly self-selected and were not all enrolled in the same course, introducing variables not attributed in the study. The majority of quantitative studies instead concluded there is a higher retention rate in traditional classrooms than in distance education. In the Phipps and Merisotis (1999) review of Russell’s original research, which included online education, results indicated that research has shown even lower retention rates in online students than in students attending classes in the traditional college setting. The high dropout rate among distance education students was not addressed in Russell’s meta-analysis, and Phipps and Merisotis found no suitable explanation in the research. They postulated that the decreased retention rate documented within distance education studies skews achievement data by excluding the dropouts. 
Diaz (2002) found a high drop rate for online students compared to traditional classroom-based students in an online health education course at Nova Southeastern. Other studies have supported the theory that retention of online students is far below that 33 of the traditional campus students. In 2002, Carr, reporting for The Chronicle of Higher Education, noted that online courses routinely lose 50 % of students who originally enrolled, as compared to a retention rate of 70-75% of traditional face-to-face students. Carr reported dropout rates of up to 75% in online courses as a likely indicator of the difficultly faced in retaining distance education students who do not routinely meet with faculty. The data have not been refuted. As community colleges began utilizing distance education, retention rates were reported as higher than traditional students (Nash, 1984). However, the California Community College System report for Fall 2008 courses showed inconsistent retention results for distance education learners, varying by the type of course. Results indicated equivalent retention rates for online instruction compared to traditional coursework in the majority of courses. Lower retention rates were indicated in online engineering, social sciences, and mathematics courses as compared to traditional classroom instructional models (California Community Colleges Chancellor's Office, 2009). Due to the limited number of vocational/technical or occupational courses taught in the online mode, there was little data on student retention. In 1997, Hogan studied technical course and program completion of students in distance and traditional vocational education and found that course completion rates were higher for distance education students. However, program completion rates were higher for traditional students than for students enrolled in distance education (Hogan, 1997). In summary, studies of retention have focused primarily on student characteristics while acknowledging that postsecondary retention rates vary according to a variety of factors. Research showed mixed results concerning the retention rate of online students, 34 though quantitative data leans heavily toward a lower course retention rate in online students. Data from 4-year universities have shown lower retention rates for online students than for traditional face-to-face students, while community colleges have shown inconsistent results. Data from vocational-technical education has been limited, but course retention rates are higher for online students, while program retention rates are lower. No significant research factor affecting retention has been isolated between students in online baccalaureate completion programs and students participating in traditional classroom-based settings. Summary Research studies have been conducted analyzing student retention in higher education, transfer and retention of students from community colleges to universities, the impact of distance education, and student achievement and retention factors related to distance education. However, no comparative research was identified that compared the achievement and retention of students participating in an occupationally oriented transfer program utilizing both online education and traditional classroom-based instruction. Chapter Three addresses the topics of research design, hypotheses, and research questions. Additionally, population and sample, data collection, and data analysis are discussed. 
CHAPTER THREE

METHODOLOGY

The purpose of this study was to determine if there is a significant difference between course grades of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The study also examined if there is a significant difference between course retention and program retention of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The methodology employed to test the research hypotheses is presented in this chapter. The chapter is organized into the following sections: research design, hypotheses and research questions, population and sample, data collection, data analysis, and summary.

Research Design

A quantitative, quasi-experimental research design was selected to study grades, course retention, and program retention in students enrolled in the Technology Administration program. The design was chosen as a means to determine if significant differences occur between online and face-to-face students by examining numerical scores from all participants enrolled, and retention rates in both courses and programs in the Technology Administration program.

Hypotheses and Research Questions

This study focused on three research questions with accompanying hypotheses. The research questions and hypotheses guiding the study follow.

Research Question 1: Is there a statistically significant difference between students' grades in online classes and traditional face-to-face classes?

H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance.

Research Question 2: Is there a statistically significant difference between course retention rates of students in online classes and traditional face-to-face classes?

H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

Research Question 3: Is there a statistically significant difference in program retention between students who entered the program in online classes and students who entered the program in traditional face-to-face classes?

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

Population and Sample

The two populations selected were students enrolled in online and face-to-face courses. The sample included students enrolled in Technology Administration courses. Student enrollment was analyzed for all Technology Administration courses in the program sequence to determine the number of students available in online and face-to-face classes. The course enrollment data for the sample are outlined in Table E1. The subsample of the data utilized for the study is presented in Table 1.
Table 1

Technology Administration Enrollment Data

                            TA 300          TA 310
Year        Instructor    FTF     OL      FTF     OL
Spring 02       A                           14     25
Fall 02         A          11     20         9     26
Spring 03       A                           29     38
Fall 03         A          20     29        13     34
Spring 04       B                           32     25
Fall 04         B          18     32        10     28
Spring 05       B                           23     31
Fall 05         B          15     28        11     28
Spring 06       B                           13     30
Fall 06         B          14     24        24     32
Spring 07       B                           15     33
Fall 07         B          16     23        27     30
Spring 08       B                           22     35
Total                      94    156       242    395

Note: TA 300 = Evolution and Development of Technology; TA 310 = Technology and Society. FTF = face-to-face; OL = online.

The subsample for hypothesis 1 and hypothesis 2 included all students enrolled in two entry-level courses required for completion of the Technology Administration program: TA 300 Evolution and Development of Technology, and TA 310 Society and Technology. The university offered the courses in online and face-to-face formats during the period of the study. Two instructors, identified as A and B, were involved with teaching the online and face-to-face courses. Two courses were selected that met the following criteria: (a) the same faculty member taught both courses, (b) the courses were offered over the period of the study consistently in online and face-to-face instruction, and (c) the syllabi for simultaneous online and face-to-face sections were identical.

For hypothesis 3, data included records of all students enrolled in TA 300 Evolution and Development of Technology for the Fall semesters of 2002, 2003, 2004, 2005, and 2006. The course was selected for inclusion in the study based on the following criteria: (a) student enrollment in the course was the result of declaration of the Technology Administration program major and (b) parameters of the study allowed students 2 or more years to complete the program requirements. For the purpose of the study, all student names were removed.

Data Collection

An Institutional Review Board (IRB) form was prepared for Washburn University approval prior to data collection. The study was designated as an exempt study. The Washburn University IRB form is provided in Appendix A. Approval of the IRB was transmitted by e-mail. A copy is located in Appendix B. In addition, an IRB was submitted to Baker University. The form is located in Appendix C. The Baker IRB approval letter is located in Appendix D.

Washburn University had two types of data collection systems in place during the period identified for the study, Spring 2002 through Spring 2008. The AS 400 data collection system generated paper reports for 2002 and 2003. The researcher was allowed access to paper records for 2002 and 2003. Enrollment results for all Technology Administration sections for 2002-2003 were entered manually into an Excel spreadsheet. In 2004, the University transferred to the Banner electronic student data management system. All records since 2004 were archived electronically and were retrieved utilizing the following filters for data specific to students enrolled in the identified Technology Administration courses: TA course designation and specific coding for year and semester to be analyzed (01 = Spring semester, 03 = Fall semester, 200X for specified year). Results retrieved under the Banner system were saved as an Excel spreadsheet by the researcher. The course enrollment data for the sample are presented in Tables E1 and E2. Student transcripts and records were analyzed to determine program completion or continued enrollment in the program for program retention analysis.
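The record-retrieval step described above can be illustrated with a short script. The following is a minimal sketch in Python, assuming a hypothetical Excel export with columns named Course, Term, and Year; the file name, column names, and helper function are illustrative only and are not part of the Banner system or the original study's procedures.

import pandas as pd

# Hypothetical export of enrollment records; column names are assumed for illustration.
records = pd.read_excel("ta_enrollment_export.xlsx")

# Term coding used in the retrieval filters: 01 = Spring semester, 03 = Fall semester.
TERM_CODES = {"01": "Spring", "03": "Fall"}

def select_course_records(df, course, year, term_code):
    """Return rows for one TA course in one semester (e.g., TA 300, Fall 2004)."""
    return df[(df["Course"] == course)
              & (df["Year"] == year)
              & (df["Term"] == term_code)]

# Example: records for TA 300 in the Fall 2004 semester (term code 03).
ta300_fall_2004 = select_course_records(records, "TA 300", 2004, "03")
print(len(ta300_fall_2004), "records retrieved for TA 300,", TERM_CODES["03"], 2004)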
Documents examined included paper student advising files located within the Technology Administration department and specific student records housed within the Banner reporting system. Technology Administration course TA 300 was selected based on the following: (a) it is a required entry course only for Technology Administration majors, and (b) TA 310, the other course studied, is a dual enrollment course for business department majors.

Data Analysis

Data analysis for all hypothesis testing was conducted utilizing SPSS software version 16.0. The software system provided automated analysis of the statistical measures. To address Research Question 1, a two-factor analysis of variance was used to analyze for a potential difference in course grades due to delivery method (online and face-to-face), a potential difference due to instructor (instructors A and B), and a potential interaction between the two factors. Salkind (2008) referred to a difference between the levels of any factor revealed by the analysis of variance as a main effect. This analysis produces three F statistics, which were used to determine whether the grades of online students as compared to their classroom-based counterparts were affected by a main effect for delivery, a main effect for instructor, or an interaction between instructor and delivery.

Chi-square testing was selected to address Research Questions 2 and 3. The rationale for selecting chi-square testing was to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Salkind, 2008). If the obtained chi-square value is greater than the critical value, there is sufficient evidence to believe the research hypothesis is true. For Research Question 2, a chi-square test for differences between proportions analyzed course retention of online and face-to-face students at the end of the semester. For Research Question 3, a chi-square test for differences between proportions analyzed program retention, comparing students who began the program in the online section of TA 300 to the students who began in the face-to-face section.

Limitations of the Study

Roberts (2004) defined the limitations of the study as those features of the study that may affect the results or the ability to generalize the results. The limitations of this study included (a) potential for data entry error, (b) curriculum modifications not reflected in the syllabi made by instructors over the period of the study, (c) behavior of the instructors during delivery in the two different formats, and (d) rationale of students for selecting one course delivery method over another. These may affect the generalizability of this study to other populations.

Summary

This chapter described the research design, population and sample, hypotheses, data collection, and analysis used in this research study. Statistical analyses using two-way analysis of variance and chi-square tests were used to determine if there are statistically significant differences in the course grades, course retention, and program retention of students enrolled in online classes as compared to their face-to-face counterparts. The results of this study are presented in Chapter Four.

CHAPTER FOUR

RESULTS

The study had three main purposes. The first purpose was to determine if there was a difference in grades between students in online classes and students in traditional face-to-face classes in the Technology Administration program.
In addition, the study was designed to examine the difference in course retention rates of students in the online classes as compared to the face-to-face classes. The third part of the study was designed to examine program retention rates of students who began the program in online classes and students who began the program in traditional face-to-face classes. This chapter begins with the descriptive statistics for the sample: gender, age, grades by gender, and course selection of students in online or face-to-face courses by gender. From the three research questions, research hypotheses were developed, and the results of the statistical analyses used to test each hypothesis are presented.

Descriptive Statistics

Demographic data for the sample were collected from the student data system for 2002 through 2009. The descriptive statistics presented below include gender (n = 884), age (n = 880), grades by gender (n = 884), and course selection online or face-to-face by gender (n = 884). Table 2 presents the cross-tabulation of the frequencies for gender and age group for the sample selected for the study. The mean age for the sample tested was 31.06 years, with a standard deviation of 9.46 years. The age range of the sample was from 18 to 66 years. One participant did not report gender. Age was not available for four participants.

Table 2

Participant Age Group by Gender (n = 880)

              Age Range by Years
              <20     20-29    30-39    40-49    50-59    60-69
Female          0       198      121       62       29        3
Male            5       281      104       53       19        5

Note: Gender not reported for one participant; age not reported for four participants. Females = 413, Males = 467.

Table 3 presents the frequency of course grades by gender and the total number of students receiving each grade. Grades were distributed across the continuum, with slightly more females than males receiving A's; more males than females receiving B's, C's, and F's; and a nearly equal distribution of students receiving D's. More males withdrew from classes than did females.

Table 3

Course Grade Frequencies by Gender (n = 884)

Grade                 Female     Male     Total
A                        245      208       453
B                         53       79       132
C                         32       70       102
D                         17       16        33
F                         37       55        92
No Credit                  1        0         1
Passing                    0        1         1
Withdraw                  25       42        67
Withdraw Failing           3        0         3
Total                    413      471       884

Note: Gender not reported for one participant.

Table 4 presents the course selection patterns of male and female students. Overall, more students selected online courses than face-to-face courses. Females and males enrolled in online courses in nearly equal numbers; however, proportionally more females (68.8%) than males (60.9%) chose the online instructional format instead of face-to-face.

Table 4

Course Selection by Gender (n = 884)

Course Type       Female     Male     Total
Face-to-face         129      184       313
Online               284      287       571
Total                413      471       884

Note: Gender not reported for one participant.

Hypothesis Testing

H1: There is a statistically significant difference in the course grades of students enrolled in online classes and students enrolled in a traditional classroom setting at the 0.05 level of significance.

The sample consisted of 815 students enrolled in online and face-to-face Technology Administration courses at Washburn University. A two-factor analysis of variance was used to analyze for the potential difference in course grades due to delivery method (online and face-to-face), the potential difference due to instructor (instructors A and B), and the potential interaction between the two independent variables. Means and standard deviations for grades were calculated by delivery type and instructor. Table 5 presents the descriptive statistics.
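The analyses reported here were produced with SPSS 16.0. Purely as an illustration of the same two-factor design, the sketch below shows an equivalent delivery-by-instructor analysis of variance using Python's statsmodels; the input file and column names (grade_points, delivery, instructor) are assumptions for the example, and its output is not a substitute for the SPSS results reported in Tables 5 and 6.

# Minimal sketch of a two-factor (delivery x instructor) ANOVA with interaction.
# Illustrative only; file and column names are assumed.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

grades = pd.read_excel("ta_course_grades.xlsx")  # assumed columns: grade_points, delivery, instructor

# Descriptive statistics by delivery type and instructor (cf. Table 5).
print(grades.groupby(["delivery", "instructor"])["grade_points"].agg(["mean", "std", "count"]))

# Two-factor ANOVA: grade_points ~ delivery * instructor (cf. Table 6).
# Note: SPSS reports Type III sums of squares by default; Type II is used here and can
# differ slightly when the design is unbalanced.
model = ols("grade_points ~ C(delivery) * C(instructor)", data=grades).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for delivery, instructor, and their interaction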
The mean of grades by delivery showed no significant difference between online and face-to-face instruction. Additionally, no significant difference in mean grade was evident when analyzed by instructor.

Table 5

Means and Standard Deviations by Course Type and Instructor

Course type      Instructor     Mean       Standard Deviation     n
Face-to-face     A              3.0690     1.41247                 29
                 B              2.9586     1.39073                266
                 Total          2.9695     1.39084                295
Online           A              2.9024     1.52979                 41
                 B              3.0271     1.35579                479
                 Total          3.0271     1.36911                520
Total            A              2.9714     1.47414                 70
                 B              3.0027     1.36783                745
                 Total          3.0000     1.37635                815

The results of the two-factor ANOVA, presented in Table 6, indicated there was no statistically significant difference in grades due to delivery method (F = 0.078, p = .780, df = 1, 811). This test specifically addressed hypothesis 1. In addition, there was no statistically significant difference in grades due to instructor (F = 0.002, p = .967, df = 1, 811), and no significant interaction between the two factors (F = 0.449, p = .503, df = 1, 811). The research hypothesis was not supported.

Table 6

Two-Factor Analysis of Variance (ANOVA) of Delivery by Instructor

                          df       F         p
Delivery                   1       0.148     0.780
Instructor                 1       0.003     0.967
Delivery x Instructor      1       0.449     0.503
Error                    811
Total                    815

H2: There is a statistically significant difference in student course retention between students enrolled in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

The sample consisted of 885 students enrolled in TA 300 and TA 310 online and face-to-face courses. The hypothesis testing began with the analysis of the contingency data presented in Table 7. The data are organized with course selection (online or face-to-face) as the row variable and retention in the course as the column variable. Data were included in the retained column if a final grade was reported for the participant. Participants who were coded as withdraw or withdraw failing were labeled as not retained. Chi-square analysis was selected to determine whether the observed distribution of frequencies differed from what would be expected by chance (Roberts, 2004).

The result of the chi-square testing (χ² = 2.524, df = 1, N = 884, p = .112) indicated there was no statistically significant difference between retention of students enrolled in online courses compared to students enrolled in face-to-face courses in the TA program. Additional results indicated that 93.92% (294/313) of the face-to-face students were retained, compared to 90.89% (519/571) of the online students. The research hypothesis was not supported.

Table 7

Course Retention of Online and Face-to-Face TA Students

                          Retained     Not retained     Total
Face-to-face students          294               19       313
Online students                519               52       571
Total                          813               71       884

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

The sample consisted of 249 students enrolled in TA 300 in the online and face-to-face courses from Fall 2002 through Fall 2008. The hypothesis testing began with the analysis of the contingency data located in Table 8. The table is organized with course selection (online or face-to-face) as the row variable and program retention as the column variable. Data were included in the retention column if students had successfully met the requirements for a Bachelor of Applied Science in Technology Administration or if they were enrolled in the program in Spring 2009.
Data were included in the non-retained column if students had not fulfilled degree requirements and were not enrolled in Spring 2009. Chi-square analysis was selected to determine whether the observed distribution of frequencies differed from what would be expected by chance (Roberts, 2004).

The result of the chi-square testing (χ² = .132, df = 1, N = 249, p = .717) indicated there was no statistically significant difference between the program retention rate of students who began the TA program in the online courses compared to the students who began the program in the face-to-face courses. Additional results showed that 91.57% (163/178) of students who began in online courses were retained, compared to 92.96% (66/71) of students who began the TA program in face-to-face courses. The research hypothesis was not supported.

Table 8

Program Retention of Online and Face-to-Face TA Students

                  Retained     Not retained     Total
Face-to-face            66                5        71
Online                 163               15       178
Total                  229               20       249

Summary

This chapter began with an introduction summarizing the analyses and statistical tests in the order in which they were presented. This was followed by descriptive statistics for the sample, including the age range of participants, grades by gender, and course selection by gender. Results from the testing of H1 revealed no significant difference between the course grades of online students and students enrolled in traditional face-to-face classes. Chi-square testing was utilized for the testing of H2. Results indicated there was no significant difference in course retention of students enrolled in online courses and students enrolled in traditional face-to-face courses. H3 was also tested utilizing chi-square testing. The results indicated no significant difference in program retention of students who began the TA program in online courses and students who began in traditional face-to-face courses. Chapter Five provides a summary of the study, discussion of the findings in relationship to the literature, implications for practice, recommendations for further research, and conclusions.

CHAPTER FIVE

INTERPRETATION AND RECOMMENDATIONS

Introduction

In the preceding chapter, the results of the analysis were reported. Chapter Five consists of the summary of the study, an overview of the problem, the purpose statement and research questions, a review of the methodology, major findings, and findings related to the literature. Chapter Five also contains implications for further action and recommendations for further research. The purpose of the latter sections is to expand on the research into distance education, including implications for the expansion of course and program delivery and future research. Finally, a summary is offered to capture the scope and substance of what has been offered in the research.

Study Summary

The online delivery of course content in higher education has increased dramatically in the past decade. Allen and Seaman (2007a) reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. They also reported a 9.7% increase in online enrollment compared to the 1.5% growth in overall higher education. As online delivery has grown, so has criticism of its efficacy. Online delivery of education has become an important strategy for the institution that is the setting of this study. The purpose of this study was three-fold.
The first purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study was designed to expand the knowledge base concerning online education and its efficacy in providing baccalaureate degree completion opportunities.

The research design was a quantitative study to compare course grades, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized to compare online and traditional face-to-face students. In order to answer Research Question 1, a sample of students enrolled in TA 300 and TA 310 online and traditional face-to-face courses was analyzed. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for a potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), the instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze both course and program retention (Research Questions 2 and 3). For Research Question 2, archived data from the Washburn University student system were analyzed for students enrolled in TA 300 and TA 310. Additional variables identified for this sample included course selection and instructor (A or B). For Research Question 3, archived data from the Washburn University system were used to identify students with declared Technology Administration majors who began the TA program enrolled in online and face-to-face courses. A single gatekeeper course (TA 300) was identified for testing. Two instructors (A and B) were responsible for instruction during the testing period.

A two-factor ANOVA was utilized to test H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance. ANOVA testing was utilized to account for the two delivery methods and two instructors involved for the period of the study. The results of the test indicated there was no statistically significant difference in grades due to delivery method. The results of the testing also indicated no statistically significant difference in grades due to instructor and no interaction between the two independent variables. The research hypothesis was not supported. To test the next hypothesis, chi-square testing was utilized.
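As an illustration of this kind of test (not a reproduction of the SPSS procedure used in the study), the sketch below runs a chi-square test of independence on the 2 x 2 course retention table reported in Table 7; only the cell counts come from the study, and the variable names are assumed for the example.

# Minimal sketch of a chi-square test on the 2 x 2 course retention table (counts from Table 7).
# Run without Yates' continuity correction, which approximately reproduces the uncorrected
# statistic reported for hypothesis 2.
from scipy.stats import chi2_contingency

retention_table = [
    [294, 19],   # face-to-face: retained, not retained
    [519, 52],   # online: retained, not retained
]

chi2, p, dof, expected = chi2_contingency(retention_table, correction=False)
n = sum(map(sum, retention_table))
print(f"chi-square({dof}, N = {n}) = {chi2:.3f}, p = {p:.3f}")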
H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in course retention of students enrolled in online courses and students enrolled in face-to-face courses in the TA program. The research hypothesis was not supported.

To test the final hypothesis, chi-square testing was also used. H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in the program retention rate of students who began the TA program in the online courses and students who began the program in the face-to-face courses. The research hypothesis was not supported.

Testing found that course retention was high in both formats; these higher-than-expected rates may be attributable to the age of the participants or their prior degree completion. The analyses found no significant difference in grades, course retention, or program retention for students in online TA courses and students enrolled in traditional face-to-face instruction. The implications of these results in relation to the current literature are discussed in the next section.

Findings Related to the Literature

Online education has become a strategy for higher education to provide instruction to students limited by distance or time, or who, for other reasons, do not wish to attend traditional classroom-based university classes. Additionally, online education allows higher education institutions to expand their geographic base. Institutions have utilized distance education for over a century to provide instruction, but it is only within the last two decades that instruction over the Internet has replaced correspondence, television, and video courses as the method of choice for delivery (Russell, 1999).

Utilizing grades as a measure of achievement, meta-analyses conducted by Russell (1999), Shachar and Neumann (2003), and Machtmes and Asher (2000) found no significant difference in the grades of online students and traditional classroom-based students. These analyses utilized multiple studies of course information, comparing grades of online students and traditional face-to-face students, primarily utilizing t tests as the preferred methodology. The results of previous research were supported by the present study. Additionally, this study went further, analyzing data over more than one semester and controlling for the effect of different instructors. These results were contrary to the conclusion reached by Phipps and Merisotis (1999).

The second purpose of the study was to determine if a significant difference existed between the course retention of students enrolled in online TA courses and students enrolled in face-to-face courses. Meta-analyses conducted by Phipps and Merisotis (1999) and Nora and Snyder (2009) concluded that course retention rates were much lower for online students than for their face-to-face counterparts. The previous meta-analyses examined retention of online students and traditional face-to-face students in distinct courses, utilizing t tests as the primary methodology.
Those studies relied on t tests rather than chi-square testing because each was limited to a single course taught by one instructor over a single semester or cycle. Carr (2000) reported in The Chronicle of Higher Education that retention of online students was 50% less than that of traditional face-to-face students. Carr's results were based on the examination of longitudinal retention data from universities as reported to the United States Department of Education. The results of the present study found no significant difference in the course retention rates. These results are supported by the findings of Carmel and Gold (2007), who reported no significant difference in course retention rates of online students compared to traditional face-to-face students in their analysis of students in multiple courses across disciplines at a 4-year university. The present study expanded those results, examining course data in the same discipline over a 6-year period and controlling for delivery by two separate instructors.

Research into program completion rates of AAS students has been conducted primarily in traditional university settings, including Townsend's (2002) studies at the University of Missouri-Columbia. Townsend's results showed a lower baccalaureate completion rate for students entering with an AAS degree than for students who transferred to 4-year universities with an AA degree. Studies by Hogan (1997) of vocational-education programs also found a lower program completion rate for online students compared to students in traditional-delivery vocational education programs. Analysis of the data in the current study showed no significant difference in the program completion rate of students who began in online TA courses as compared to students who began the program in face-to-face courses.

Conclusions

The use of distance education for postsecondary instruction, primarily in the form of the Internet, has both changed and challenged the views of traditional university-based instruction. Multiple studies have been designed in an effort to examine whether online students have the same level of academic achievement as their traditional higher education peers. The present study agrees with the research indicating there is no statistically significant difference in the grades of online students and their face-to-face counterparts. In addition, with student retention an issue for all postsecondary institutions, the data from previous studies indicated a lower retention rate for online students than for their traditional face-to-face classmates. The current study contradicted those arguments. In the following sections, implications for action, recommendations for research, and concluding remarks are addressed.

Implications for Action

As postsecondary institutions move into the 21st century, many have examined issues of student recruitment and retention in an effort to meet the demands of both their students and their communities. The majority of postsecondary institutions have initiated online education as a strategy to recruit students from beyond their traditional geographic areas. This study supported existing research utilizing grades as a measure of achievement and should alleviate doubt that online students are shortchanged in their education. The transition of existing face-to-face courses to an online delivery model can be accomplished without sacrificing achievement of course and program goals.
The study also examined course and program retention data, finding no significant differences between online and traditional students in the TA program. The findings of this study support the expansion of additional online courses and programs within the School of Applied Studies. Finally, this study can provide the basis for further action, including analyzing other programs and courses offered in the online format by the University. The analysis of other programs offered in an online delivery model would enhance further development of online courses and programs.

Recommendations for Future Research

Distance education delivery has expanded dramatically with the use of the Internet for online instruction. The present study could be continued in future years to measure the effects of specific curriculum delivery models and changes made to online delivery platforms. In addition, the study could be expanded to include specific characteristics of student retention named in the literature, such as examining whether the age and entering GPA of students provide any insight into course and program retention. The study could also be expanded to include other universities with similar baccalaureate-degree completion programs and other disciplines. Because the body of research is limited concerning the baccalaureate-degree completion of students who begin their postsecondary education in career-oriented instruction, there is value in continuing to study baccalaureate completion rates, both in an online format and in more traditionally based settings.

Concluding Remarks

The current study examined a Technology Administration program that has been offered in both online and face-to-face formats, utilizing data from Fall 2002 through Spring 2008. The TA program was developed to allow students who had completed an occupationally oriented AAS degree to complete a bachelor's degree program. Three hypotheses were tested in this study, examining course grades, course retention, and program retention of students enrolled in online and face-to-face courses in Technology Administration. No significant difference was found for any of the three hypotheses. These results form a strong foundation for expanding online courses and programs at Washburn University. By addressing two of the major concerns of educators, achievement and retention, the study results allow the expansion of online courses and programs to benefit from data-driven decision-making. Other institutions can and should utilize data to examine their existing online courses and programs.

REFERENCES

Allen, I. E., & Seaman, J. (2003). Seizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Needham, MA: The Sloan Consortium.

Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Needham, MA: The Sloan Consortium.

Allen, I. E., & Seaman, J. (2007a). Making the grade: Online education in the United States. Needham, MA: The Sloan Consortium.

Allen, I. E., & Seaman, J. (2007b). Online nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium.

Arle, J. (2002). Rio Salado College online human anatomy. In C. Twigg, Innovations in online learning: Moving beyond no significant difference (p. 18). Troy, NY: Center for Academic Transformation.

Atkins, T. (2008, May 13). Changing times bring recruiting challenges at WU. Retrieved May 15, 2008, from CJOnline Web site at http://cjonline.com/stories/051308/loc_278440905.shtml

Berge, Z., & Huang, L. P.
(2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special attention to e-learning. American Center for the Study of Distance Education. Retrieved April 17, 2009, from DEOSNEWS Web site at http://www.ed.psu.edu/acsde/deos/deosnews/deosarchives.asp

Bradburn, E., Hurst, D., & Peng, S. (2001). Community college transfer rates to 4-year institutions using alternative definitions of transfer. Washington, DC: National Center for Education Statistics.

Brown, B. W., & Liedholm, C. (2002, May). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92, 444-448.

California Community Colleges Chancellor's Office. (2009, April 20). Retention rates for community colleges. Retrieved April 20, 2009, from https://misweb.cccco.edu/mis/onlinestat/ret_suc_rpt.cfm?timeout=800

Carmel, A., & Gold, S. S. (2007). The effects of course delivery modality on student satisfaction and retention and GPA in on-site vs. hybrid courses. Retrieved September 15, 2008, from ERIC database. (Doc. No. ED496527)

Carnevale, D. (2006, November 17). Company's survey suggests strong growth potential for online education. The Chronicle of Higher Education, p. 35.

Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education, pp. 1-5.

Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass.

Diaz, D. (2002, May-June). Online drop rates revisited. Retrieved April 8, 2008, from The Technology Source Archives Web site at http://www.technologysource.org/article/online_drop_rates-revisited/

Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. The Journal of Higher Education, 63, 188-214.

Ebel, R., & Frisbie, D. (1991). Essentials of educational measurement. Englewood Cliffs, NJ: Prentice Hall.

Gilman, E. W., Lowe, J., McHenry, R., & Pease, R. (Eds.). (1998). Merriam-Webster's collegiate dictionary. Springfield, MA: Merriam.

The Harvard guide. (2004). Retrieved May 20, 2008, from http://www.news.harvard.edu/guide

Hogan, R. (1997, July). Analysis of student success in distance learning courses compared to traditional courses. Paper presented at the Sixth Annual Conference on Multimedia in Education and Industry, Chattanooga, TN.

Jacobs, J., & Grubb, W. N. (2003). The federal role in vocational education. New York: Community College Research Center.

Joliet Junior College history. (2008). Retrieved May 20, 2008, from Joliet Junior College Web site at http://www.jjc.edu/campus_info/history/

Kansas Board of Regents. (2002-2003). Degree and program inventory. Retrieved May 14, 2008, from http://www.kansasregents.org

Keeley, E. J., & House, J. D. (1993). Transfer shock revisited: A longitudinal study of transfer academic performance. Paper presented at the 33rd Annual Forum of the Association for Institutional Research, Chicago, IL.

Knowles, M. S. (1994). A history of the adult education movement in the United States. Melbourne, FL: Krieger.

Laanan, F. (2003). Degree aspirations of two-year students. Community College Journal of Research and Practice, 27, 495-518.

Lynch, T. (2002). LSU expands distance learning program through online learning solution. T.H.E. Journal (Technological Horizons in Education), 29(6), 47.

Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-41.

Nash, R.
(1984, Winter). Course completion rates among distance learners: Identifying possible methods to improve retention. Retrieved April 19, 2009, from Online Journal of Distance Education Web site at http://www.westga.edu/~distance/ojdla/winter84/nash84.htm

National Center for Education Statistics. (2000). Distance education statistics 1999-2000. Retrieved March 13, 2008, from http://nces.ed.gov/das/library/tables_listing

National Center for Education Statistics. (2001). Percentage of undergraduates who took any distance education courses in 1999-2000
INTRODUCTION

Historically, postsecondary education in the United States was founded on the principles of the European system, requiring the physical presence of professors and students in the same location (Knowles, 1994). From 1636, with the founding of Harvard University (The Harvard Guide, 2004), to the development of junior colleges and vocational schools in the early 1900s (Cohen & Brawer, 1996; Jacobs & Grubb, 2003), the higher education system developed to prepare post-high school students for one of three separate tiers. The college and university system in the United States developed its own set of structures designed to prepare students for baccalaureate and graduate degrees. Junior colleges were limited to associate degrees, while vocational education institutions offered occupational certificates. In many cases, there was inadequate recognition of the postsecondary education offered at junior colleges and vocational education institutions, resulting in the inability of students to transfer to 4-year institutions (National Center for Education Statistics, 2006).

In the mid-20th century, some junior colleges began to provide academic, vocational, and personal development educational offerings for members of the local communities. During this same period, junior or community colleges developed a role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs involved Associate of Arts (AA) and Associate of Science (AS) degrees. Associate of Applied Science (AAS) degrees were developed during the 1990s. The AAS degree was granted to those who successfully completed the majority of their college program in vocational education. The creation of a variety of applied baccalaureate degrees allowed students who had previously thought of the AAS degree as a terminal program to complete a baccalaureate degree (Kansas Board of Regents, 2002-2003).

Online education also became a strategy for students to access higher education in the 1990s (Allen & Seaman, 2007b). The proliferation of online courses alleviated some of the location-bound barriers to higher education, but online education was criticized by traditional academicians as less rigorous than traditional classroom-based course work. Russell attempted to address this argument with his 1999 meta-analysis of studies dating from the 1920s and covering multiple delivery models, including online education. Russell concluded there was no statistically significant difference in student achievement between courses offered online and those offered in the traditional classroom setting.

Since the development of correspondence courses in the 1920s, researchers have attempted to ascertain if students participating in distance education are being shortchanged in their educational goals. No significant difference in grades has been found in the majority of studies designed to address this issue. Studies analyzing online student retention, however, have shown significantly lower retention for online students. In the last 10 years, research studies have expanded to include variations of online education. These include strictly online courses, hybrid courses, Web-assisted classroom settings, and the traditional higher education course offered only as face-to-face instruction (Carmel & Gold, 2007).
Online education continues to proliferate at the same time the number of secondary students in the United States overall is projected to increase (National Center 3 for Education Statistics [NCES], 2006). The projected increase of potential postsecondary students and online postsecondary options provides opportunities for increases in online education programs and courses. In 2000, NCES reported that over 65% of students in higher education were participating in online courses. In a 2007 study, Allen and Seaman estimated only 16% of those enrolled in online education courses are undergraduate students seeking their first degree, counter to the projected increase in traditional-age students. The majority of enrollees in online education are adults updating or advancing their credentials, creating an additional educational market for colleges and universities seeking to expand enrollment without adding physical space (Allen & Seaman, 2007a). For states and localities faced with a contradictory traditional-age enrollment decrease, these figures present an untapped market for higher education courses and programs. Background Researchers attempted to analyze the efficacy of distance education as far back as the 1920s when correspondence courses were created to meet the need of students not willing to attend a traditional classroom-based higher education setting. A meta-analysis of these studies resulted in “The No Significant Difference Phenomenon,” reported by Russell (2001). The results of over 355 studies were compiled, comparing various modes of delivery including correspondence, audio, television courses, and the newest wave of computer-facilitated instruction. Following analyses of studies completed prior to 2001, Russell concluded there was no difference in learning between students enrolled in distance education and those completing courses in the traditional setting. Studies completed since then have provided mixed results. Summers, Waigand, and Whittaker (2005) found there was no difference in GPA and retention between the 4 online and traditional classroom. Arle (2002) found higher achievement by online students, and Brown and Liedholm (2002) found GPA and student retention better in a traditional classroom setting. Student retention is an integral part of the student achievement conversation and is an issue for all forms of higher education. Degree-seeking students’ overall retention has been reported as less than 56% by NCES (2001). Long considered a problem in higher education, attention to the distance education model has shown even lower retention rates in online students than in students attending at the traditional college setting (Phipps & Meristosis, 1999). Research on different modalities, such as fully online and hybrid online courses, has produced mixed results (Carmel & Gold, 2007). No significant trend toward increased retention of students in any of the online modalities has been documented. Retention studies of transfer students have primarily included traditionally defined students transfering from a community college. Statistics have consistantly shown a lower retention rate for students transfering from a community college to a 4-year university than for students who began their post-high school education at a 4-year institution (NCES, 2006). 
Townsend’s studies of transfer students at the University of Missouri-Columbia also showed a lower baccalaureate retention rate for students who had completed an AAS degree than for students beginning their education at a 4-year institution (Townsend, 2002). Occupationally oriented bachelor’s degree completion programs are relatively new to higher education. Transfer programs in the liberal arts from community colleges to 4-year institutions were common by the 1990s. Townsend (2001), in her study 5 conducted at the University of Missouri–Columbia, observed the blurring of the lines between non-transferrable occupationally oriented undergraduate degrees and undergraduate degrees and certificates that were easily transferred. The study conducted by Townsend was among the first to recognize that many students who began their education at community and technical colleges had bachelor’s degree aspirations that grew after their completion of an occupationally-oriented degree. Laanan proposed that the increase in institutions offering AAS degrees necessitated new ways to transfer undergraduate credits (2003). The setting of this study is a medium-sized Midwestern campus located in Topeka, Kansas. Washburn University enrolls approximately 6000 students a year in undergraduate and graduate programs, including liberal arts, professional schools, and a law school (Washburn University, 2008). The Technology Administration (TA) program selected for the present study began in the 1990s as a baccalaureate degree completion program for students who had received an occupationally oriented associate degree at a Kansas community college or through Washburn’s articulation agreement with Kansas vocational-technical schools. This program provided students who previously had obtained an Associate of Applied Science degree in an occupational area an opportunity to earn a bachelor’s degree. Peterson, Dean of Continuing Education, Washburn University, stated that in early 1999, Washburn University began online courses and programs at the behest of a neighboring community college (personal communication, April 18, 2008). Washburn was asked to develop an online bachelor’s degree completion program for students graduating from community colleges and technical colleges with an Associate of Applied 6 Science degree. The TA program was among the first programs to offer the online bachelor’s degree completion option. The TA program offered its first online courses in Spring 2000. Online education at Washburn expanded to other programs and courses, to include over 200 courses (Washburn University, 2008). The original online partnership with two community colleges expanded to include 16 additional community colleges and four technical colleges in Kansas, as well as colleges in Missouri, California, Wisconsin, South Carolina, and Nebraska (Washburn University, 2008). An initial study in 2002 of student’s course grades and retention in online courses offered at Washburn showed no significant difference between students enrolled in online courses and students enrolled in traditional face-to-face course work (Peterson, personal communication, April 18, 2008). No studies of program retention have been completed. In 2008, Atkins reported overall enrollment at Washburn University decreased 6.7% from Fall 2004 to Fall 2008, from 7400 to 6901 students. During the same period, online course enrollment patterns increased 65%, from 3550 students to 5874 in 2007- 2008 (Washburn University, 2008). 
Atkins also reported that between 1998 and 2008, the ratio of traditional post-high school age students to nontraditional students enrolling at Washburn University reversed from 40:60 to 60:40. The shift in enrollment patterns produced an increase in enrollment in the early part of the 21st century; however, Washburn University anticipated a decrease in high school graduates in Kansas through 2016, based on demographic patterns of the state. The state figures are opposite the anticipated increase of traditional-age students nationally (NCES, 2008). The increase in 7 distance education students in relation to the anticipated decline in traditional-age students provided the focus for the study. Purpose of the Study Online education has become an important strategy for the higher education institution that was the setting of this study. First, the purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. The second part of the study was a replication of studies comparing modes of online course delivery to traditional classroom-based instruction (Carmel & Gold, 2007; Russell, 1999). A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study’s purpose was to expand the knowledge base concerning online education to include its efficacy in providing baccalaureate degree completion opportunities. Research Questions Roberts (2004) stated research questions guide the study and usually provide the structure for presenting the results of the research. The research questions guiding this study were: 8 1. Is there is a statistically significant difference between students’ grades in online classes and traditional face-to-face classes? 2. Is there a statistically significant difference between course retention rates in online classes and traditional face-to-face classes? 3. Is there a statistically significant difference between program retention for students entering the program enrolled in online classes and students entering the program enrolled in traditional face-to-face classes? Overview of the Methodology A quantitative study was utilized to compare grades by course, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized from comparative online and traditional face-to-face classes in two separate courses. In order to answer Research Question 1, a sample of 885 students enrolled in online and traditional face-to-face courses was identified. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006 in both the online and traditional face-to-face classes. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. 
A two-factor analysis of variance was used to analyze for the potential difference in the dependent variables, course grades due to delivery method (online and face-to-face), instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze course and program retention (Research Questions 2 and 3). 9 Delimitations Roberts (2004) defined delimitations as the boundaries of the study that are controlled principally by the researcher. The delimitations for this study were 1. Only data from 2002 through 2008 from Technology Administration online and face-to-face courses were utilized. 2. The study was confined to students enrolled at Washburn University in the Technology Administration program. 3. Only grades and retention were analyzed. Assumptions Assumptions are defined as those things presupposed in a study (Roberts, 2004). The study was based on the following assumptions: 1. Delivery of content was consistent between online and face-to-face courses and instructors, 2. Course objectives were the same for paired online and traditional face-toface courses, 3. All students enrolled in the TA program met the same criteria for admission to the University, 4. All data entered in the Excel spreadsheets were correct, 5. All students enrolled in the TA program met the same criteria for grade point average and program prerequisites. 10 Definitions The following terms are defined for the purpose of this study: Distance education. Education or training courses delivered to remote locations via postal delivery, or broadcast by audio, video, or computer technologies (Allen, 2007). Dropout. A dropout is defined as a student who has left school and discontinued studies (Merriam-Webster's Collegiate Dictionary, 1998). Face-to-face delivery. This is a course that uses no online technology; content is delivered in person, either in written or oral form (Allen, 2007). Hybrid course. This course is a blend of the online and face-to-face course. A substantial proportion of the content is delivered online, typically using some online discussions and some face-to-face meetings (Allen, 2007). Online course. This defines a course where most or all of the content is delivered online via computer technologies. Typically, there are no face-to-face meetings (Allen, 2007). 2+2 PLAN. The Partnership for Learning and Networking is a collaborative set of online 2+2 baccalaureate degree programs developed by Washburn University. The programs require completion of an associate degree from one of the partner community or technical colleges (Washburn University, 2008). Retention. This term refers to the completion of a course by receiving a letter grade in a course, or a certificate of completion or degree for program completion (Washburn University, 2008). Web-assisted. A course that uses Web-based technology to facilitate what is essentially a face-to-face course (Allen, 2007). 11 Organization of the Study This study consists of five chapters. Chapter One introduced the role of distance education in higher education. Chapter One included the background of the study, the research questions, overview of the methodology, the delimitations of the study, and the definition of terms. Chapter Two presents a literature review, which includes the history of occupational postsecondary education, distance education, and studies relating to grades and retention of students involved in distance education. 
Chapter Three describes the methodology used for the research study. It includes the selection of participants, design, data collection, and statistical procedures of the study. Chapter Four presents the findings of the research study. Finally, Chapter Five provides a discussion of the results, conclusions, and implications for further research and practice. 12 CHAPTER TWO LITERATURE REVIEW This chapter presents the background for research into the efficacy of distance education in the delivery of higher education. Research studies have focused primarily on grades as a measure of the quality of distance education courses as compared to traditional face-to-face instruction. Utilizing grades has produced a dividing line among education researchers concerning the use of distance education as a delivery model. Retention in distance education has focused primarily on single courses, with little program retention data available. Data from retention studies in higher education have focused primarily on the traditional 4-year university student. Retention studies of community college students have produced quantitative results; however, these studies have been directed at community college students who identify themselves as transfer students early in their community college careers. Retention studies of students enrolled in occupationally oriented programs are limited. Statistical data of higher education shows an increased use of distance education for traditional academic courses as well as occupationally oriented courses. The increase in distance education courses and programs has provided a new dimension to studies of both grades and retention. The recognition of this increase, as well as questions concerning its impact on student learning and retention, produced the impetus for this study. The following review of the literature represents the literature related to this research study. Through examination of previous research, the direction of the present study was formulated. Specifically, the chapter is organized into four sections: (a) the 13 history of occupational transfer programs; (b) the history and research of distance education, including occupational transfer programs utilizing distance education; (c) research utilizing grades as an indicator of student learning in online education; and (d) research focusing on student retention in higher education, including student retention issues in transfer education and online transfer courses and programs. History of Occupational Transfer Programs The measure of success in higher education has been characterized as the attainment of a bachelor’s degree at a 4-year university. Occupationally oriented education was considered primarily a function of job preparation, and until the 1990s was not considered transferrable to other higher education institutions. Occupational transfer programs are a recent occurrence within the postsecondary system that provides an additional pathway to bachelor’s degree completion. Historically, the postsecondary experience in the United States developed as a three-track system. Colleges were established in the United States in 1636 with the founding of Harvard College (The Harvard Guide, 2004). Junior colleges were first founded in 1901 as experimental post-high school graduate programs (Joliet Junior College History, 2008). Their role was initially as a transfer institution to the university. 
When the Smith-Hughes Act was passed in 1917, a system of vocational education was born in the United States (Jacobs & Grubb, 2003), and was designed to provide further education to those students not viewed as capable of success in a university setting. Vocational education, currently referred to as occupational or technical education, was not originally designed to be a path to higher education. The first programs were designed to help agricultural workers complete their education and increase their skills. 14 More vocational programs were developed during the early 20th century as industrialization developed and as increasing numbers of skills were needed by workers in blue-collar occupations (Jacobs & Grubb, 2003). In the mid-20th century, some junior colleges expanded their programs beyond academic selections to provide occupational development and continuing education. Because of the geographic area from which they attracted students, junior colleges developed a role as “community” colleges. They also solidified their role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs to 4-year universities involved traditional academic degrees, including the Associate of Arts (AA) and Associate of Science (AS) degrees. Occupational programs and continuing education were viewed as terminal and non-transferrable. In 1984, Congress authorized the Carl Perkins Vocational and Technical Education Act (P.L. 98-524). In the legislation, Congress responded to employers’ concerns about the lack of basic skills in employees by adding academic requirements to vocational education legislation. Vocational program curriculum was expanded to include language arts, mathematics, and science principles, and the curriculum reflected the context of the program. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was created in 1990 to determine the skills young people need to succeed in the world of work (U.S. Department of Labor, 2000). In the second Carl Perkins reauthorization in 1990 (P.L. 105-332), Congress responded to the report, which targeted academic and job skills, by outlining a seamless system of vocational and academic 15 education to prepare vocational students to progress into and through higher education. This emphasis led to the development of Associate of Applied Science (AAS) degrees during the 1990s. Granted to those who have successfully completed programs in the applied arts and sciences for careers, AAS degrees were seen as terminal (Kansas Board of Regents, 2002-2003). But as one goal was attained, conversation turned to creating a pathway from occupational associate degrees to bachelor’s degree completion. The desire of students to continue from technical degrees to a baccalaureate was not a new idea. In a paper presented in 1989 to the American Technical Association national conference, TrouttErvin and Morgan’s overview of 2+2 programs showed acceptance of AAS degrees at traditional universities was generally non-existent. Their suggestion for an academic bridge from early technical education to baccalaureate programs highlighted programs accepting AAS degrees toward baccalaureate completion were an exception rather than a rule (Troutt-Ervin & Morgan, 1989). 
It was not until the late 1990s that applied baccalaureate degrees recognized credits from technical degree students who had previously thought of themselves in a terminal program to complete their baccalaureate degree (Wellman, 2002). Despite the advance of recognition of AAS degrees, standard definitions of transfer students continued to exclude students who completed technical programs. The U.S. Department of Education did not include students receiving an Associate of Applied Science degree in the definition of students preparing for transfer to 4-year colleges (Bradburn, Hurst, & Peng, 2001; Carnevale, 2006). Most states had comparable policies in place concerning core academic curriculum, articulation agreements, transfer of credit, 16 and statewide transfer guides. There was no general recognition of occupational credit transfer. Only a few states, including Kansas, Missouri, and Washington, allowed credits earned in occupationally oriented degrees to transfer to 4-year institutions (Townsend, 2001). No state had set clear goals for the transference of occupational credits between institutions or for the state as a whole (Wellman, 2002). Despite the lack of recognition of occupational transfer credit at the federal level, a new definition of transfer education had emerged. Initially defined as the general education component of the first 2 years of a baccalaureate, the definition of transfer education now included any courses that transferred to a 4-year college, regardless of the nature of the courses (Townsend, 2001). The line between vocational schools, community colleges, and 4-year institutions blurred in the United States as employers and students increasingly made business decisions regarding education and workforce development. Employers increasingly asked for employees with academic and technical skills, as well as critical thinking skills and personal responsibility (U.S. Department of Labor, 2000). Returning students themselves were more attuned to the demands of the 21st century workforce. Their desire to return to higher education, coupled with the economy and the variety of options available to them, required a more adaptive higher education system (Carnevale, 2006). There was growing demand among new and returning students for higher education opportunities responsive to their needs. The expanding needs of the returning student provided opportunities for higher education to respond by utilizing different delivery models. 17 Distance Education Online education became a strategy for postsecondary institutions when the first correspondence courses were initiated with the mail service in the early 20th century (Russell, 1999). As various technologies emerged, distance education utilized television and video models, in addition to paper-based correspondence courses. The expansion of distance education utilizing computer technologies renewed academic debate over the efficacy of the delivery model. Online education utilizing the Internet became a significant factor in the 1990s, prompting renewed evaluation of the use of distance learning opportunities (Russell, 1999, Phipps & Meristosis, 1999). In 1999–2000, the number of students who took any distance education courses was 8.4% of total undergraduates enrolled in postsecondary education (NCES, 2000). In 2000, the report of the Web-Based Education Commission to the President and Congress concluded that the Internet was no longer in question as a tool to transform the way teaching and learning was offered. 
The Commission recommended that the nation embrace E-learning as a strategy to provide on-demand, high-quality teaching and professional development to keep the United States competitive in the global workforce. They also recommended continued funding of research into teaching and learning utilizing web-based resources (Web-Based Education Commission, 2000). The acceptance of the importance of the Internet for delivery of higher education opened new opportunities for research and continued the academic debate of the quality of instruction delivered in online education courses and programs. In a longitudinal study from 2002-2007, The Sloan Consortium, a group of higher education institutions actively involved in online education, began studies of online 18 education in the United States over a period of 5 years. In the first study, researchers Allen and Seaman (2003) conducted polls of postsecondary institutions involved with online education and found that students overwhelming responded to the availability of online education, with over 1.6 million students taking at least one online course during the Fall semester of 2002. Over one third of these students took all of their courses online. The survey also found that in 2002, 81% of all institutions of higher education offered at least one fully online or blended course (Allen & Seaman, 2003). In their intermediate report in 2005, Allen and Seaman postulated that online education had continued to make inroads in postsecondary education, with 65% of schools offering graduate courses and programs face-to-face also offering graduate courses online. Sixty-three percent of undergraduate institutions offering face-to-face courses also offered courses online. From 2003 to 2005, the survey results showed that online education, as a long-term strategy for institutions, had increased from 49% to 56%. In addition, core education online course offerings had increased (Allen & Seaman, 2005). In Allen and Seaman’s final report (2007b) for the Sloan Consortium, the researchers reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. Allen and Seaman also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. They found by 2007, 2-year institutions had the highest growth rates and accounted for over the half the online enrollments in the previous 5 years. The researchers concluded, based on a survey 19 conducted as part of the research, institutions believed that improved student access was the top reason for offering online courses and programs (Allen & Seaman, 2007b). Community colleges began embracing distance education in the 1920s as part of their mission to provide low-cost, time-effective education. Community colleges initially provided correspondence courses by mail, but later switched to television and video courses as technology improved (Cohen & Brawer, 1996). In 2001, over 90% of public 2- year colleges in the United States provided distance education courses over the Internet (NCES, 2001). Vocational education, by the nature of its instructional format, was among the last of the educational institutions to participate in distance education. Because of the kinesthetic nature of instruction, vocational education leaders began investigating distance education opportunities in the 1990s, relying on the method to provide only the lecture portion of instruction. 
By 2004, only 31% of students enrolled in vocational schools had participated in some form of distance education during their program of study (NCES, 2005). In 2008, hands-on instruction in programs such as automobile mechanics and welding, and the clinical portion of health occupations programs, continued to be taught in the traditional classroom setting (NCES, 2008). Analysis of data reported by the NCES indicated that distance education had become a staple for higher education institutions. At both the 4-year and 2-year university level, over 65% of institutions offered more than 12 million courses in 2006-2007 by distance education. While vocational education had traditionally been more hands-on, distance education had become more prevalent in providing opportunities for students to participate in components of the system over the Internet (NCES, 2008). 20 Distance education became the prevalent strategy for higher education institutions to expand their services to new and returning students, without the financial implications of capital expansion. Higher education utilized the strategy to market to students outside their traditional geographic reach by utilizing the power of the Internet. The increasing demand from students of all ages for online opportunities provided new ground for the expansion of higher education opportunities. Grades as an Indicator of Quality of Student Learning The grading system in the United States educational system has served as an indicator of knowledge for over 100 years. Educators have utilized high school grades as a sorting mechanism in American schools to determine postsecondary opportunities. Modern society has accepted honors attainment, graduation honors, and course grades as an indicator of knowledge acquisition in postsecondary education. Stray (2001) reported that the use of grading in schools can be traced to the industrial revolution and the development of factories. William Farish of Cambridge University developed the first grading system in higher education in 1792 (Stray, 2001). Farish mimicked the system established by factories of the time: grade A being the best. The thought was that Farish employed the grading system in order to teach more students, an aberration at that time when instructors rarely had more than a few. The demand for more higher education opportunities prompted Farish to open his class to more students, and as such, led to his use of a sorting system. This was the first known record of grading utilized in classrooms to measure student achievement (Stray, 2001). 21 Smallwood (1935) reported the first grading in higher education at Yale University in 1792. Stiles, President of Yale University, directed the use of the scale in the late 18th century. However, Smallwood noted it was not until 1813 that any record of grades or marking appeared. Using a scale of 100, philosophy and mathematic professors instituted the first use of a marking instrument in the 1800s at Harvard. Smallwood noted early systems were experimental, utilizing different numerical scales, with no standardized system in place between higher education institutions. It was not until the late 1800s that faculty began using descriptors, such as A and B, to rank students according to a predetermined numerical scale (Smallwood, 1935). Experimentation with evaluation of achievement continued into the early 20th century, when educational psychologists, including Dewey and Thorndike, attempted to compare grading scales with intelligence testing. 
Thorndike’s philosophy of standardized testing and grading survived the 20th century, and his quote, “Whatever exists at all exists in some amount” (Thorndike, 1916, as cited in Ebel & Frisbie, p. 26) has been utilized in educational measurement textbooks as a validation of the use of standards of measurement to measure achievement (Ebel & Frisbie, 1991). The use of grades expanded to community colleges, high schools, and elementary schools in the early 1900s (Pressey, 1920). The use of grades throughout the educational system is fairly standardized today with the 4.0 scale. It is this standardization that allows comparison of grades as achievement between educational levels and institutions (Ebel & Frisbie, 1991) and allows grades to be utilized as a measure for comparison of educational achievement. 22 Researchers analyzing the success of community college transfer students have traditionally studied the grades of the traditional transfer student with an AA or AS degree. Keeley and House’s 1993 study of sophomore and junior transfer students at Northern Illinois University analyzed “transfer shock” (p. 2) for students matriculating from community colleges. The researchers found students who transferred from a community college obtained a grade point average significantly lower in their first semester than did students who began their college career at a 4-year institution. However, the results of the longitudinal studies showed that transfer students who persisted to graduation showed an equivalent GPA at baccalaureate completion (Keeley & House, 1993). Students who transferred from occupationally oriented degree programs typically were not included in traditional studies of transfer students. While the research in general does not include AAS students in traditional transfer data, limited conclusions were available comparing AAS students to traditional 4-year college attendees. Townsend’s study at the University of Missouri-Columbia (2002) showed no difference in grades at baccalaureate graduation between students with an AA/AS degree and students with an AAS degree. The use of grades as an indicator of the level of student achievement has been relied upon by studies comparing traditional classroom instruction and distance instruction. Research analyzing the effectiveness of student learning in distance education began with the first correspondence courses offered utilizing the mail service (Russell, 1999). The study of effectiveness of correspondence courses expanded to include new technologies, such as television and video courses, and increased with the proliferation of 23 online educational offerings. Researchers continued to challenge the effectiveness of learning methods not delivered in traditional higher education settings. In 1991, Russell reviewed over 355 studies, dating from the 1930s and continuing through the late 1980s, and found no significant difference in student learning using any form of distance education, as compared with students in classroom-based instruction (Russell, 1999). Russell’s conclusion formed the basis for a series of works collectively known as “No Significant Difference.” Russell’s conclusion from his studies follows: The fact is the findings of comparative studies are absolutely conclusive; one can bank on them. 
No matter how it is produced, how it is delivered, whether or not it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts even though students would rather be on campus with the instructor if that were a real choice. (p. xviii)

Overwhelmingly, studies have supported Russell's conclusions, including Neuhauser's (2002) study of traditional face-to-face education and online education in a business communications class at a large urban university in North Carolina. Neuhauser concluded there was no significant difference in pre- and post-test scores of students enrolled in online and traditional communications classes. In addition, Neuhauser found no significant difference in final grades, homework grades, and grades on research papers, even though learners in the online course were significantly older than learners in the traditional face-to-face section. The Summers et al. (2005) research included a comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. The study, conducted at the University of Missouri-Columbia, included undergraduate nursing students who were tested on both their pre- and post-course knowledge of statistics. Their results indicated that, utilizing grades as an indicator of knowledge, there was no significant difference between the online and traditional classroom students. In their meta-analysis, Machtmes and Asher (2002) reviewed 30 studies and concluded there did not appear to be a difference in achievement, as measured by grades, between distance and traditional learners.

As technology use continued to evolve in online education, various studies were conducted to determine whether different delivery methods created a difference in the grades of online students compared to their face-to-face counterparts. A study conducted by Carmel and Gold (2007) supported Russell's original conclusion by analyzing specific types of online platforms and delivery models. Carmel and Gold's study included hybrid and traditional classroom-based instruction. They analyzed results from 164 students in 110 courses and found no significant difference in student achievement, based on grades, between students enrolled in either delivery method. Additional studies supporting Russell's theory have crossed multiple content areas and delivery models. Brown and Liedholm's (2002) study at Michigan State University included microeconomics students in virtual, hybrid, and traditional classroom-based instruction. The study included 389 students in the traditional setting, 258 in the hybrid delivery section, and 89 students enrolled in online education. No significant difference in student learning as measured by end-of-course grades was found.

Research also showed that results were not affected by course discipline. Schulman and Simms (1999) compared pretest and posttest scores of students enrolled in an online course and a traditional course at Nova Southeastern University. The researchers compared 40 undergraduate students enrolled in online courses and 59 undergraduate students enrolled in the classroom setting of the same course. Results indicated that the students who selected online courses scored higher than traditional students on the pretest. However, posttest results showed no significant difference for the online students versus the in-class students.
Schulman and Simms concluded that online students were learning equally as well as their classroom-based counterparts. Reigle's (2007) analysis across disciplines at the University of San Francisco and the University of California found no significant difference between online and face-to-face student grade attainment.

Shachar and Neumann (2003) conducted a meta-analysis that estimated and compared the differences between the academic performance of students enrolled in distance education and those enrolled in traditional settings over the period from 1990-2002. Eighty-six studies containing data from over 15,000 participating students were included in their analysis. The results of the meta-analysis showed that in two-thirds of the cases, students taking courses by distance education outperformed their counterparts enrolled in traditionally instructed courses. Lynch (2002) found that, with the use of the "Tegrity" system, a brand-specific online platform at Louisiana State University, students' grades were slightly better than when the traditional approach was used. Initial results of a University of Wisconsin-Milwaukee study of 5,000 students over 2 years indicated that the U-Pace online students performed 12% better than their traditional Psychology 101 counterparts on the same cumulative test (Perez, 2009). Arle's (2002) study found students enrolled in online human anatomy courses at Rio Salado College scored an average of 6.3% higher on assessments than the national achievement average. Students were assessed using a national standardized test generated by the Human Anatomy and Physiology Society, whose norming sample is based entirely on traditional classroom delivery (Arle, 2002). A study conducted by Stephenson, Brown, and Griffin (2008), comparing three delivery styles (traditional, asynchronous electronic courseware, and synchronous e-lectures), indicated no increased effectiveness of any delivery style when all question types were taken into account. However, when results were analyzed, students receiving traditional lectures showed the lowest levels on questions designed to assess comprehension.

The research found support among higher education academic leaders. In a 2006 survey of Midwestern postsecondary institutions concerning their online offerings, 56% of academic leaders in the 11 states rated the learning outcomes in online education as the same or superior to those in face-to-face instructional settings. The proportion of higher education institutions believing that online learning outcomes were superior to face-to-face outcomes was still relatively small, but had grown by 34% since 2003, from 10.2 to 13.7% (Allen & Seaman, 2007b). This belief added merit to the conclusions supported by Russell and others.

Russell's (1999) "no significant difference" conclusion had its detractors. The most commonly cited are Phipps and Merisotis (1999), who reviewed Russell's original meta-analysis (1999) and reported a much different conclusion. They concluded that the overall quality of the original research was questionable and that much of the research did not control for extraneous variables, and therefore it could not show cause and effect. They included in their findings evidence that the studies utilized by Russell (2000) in the meta-analysis did not use randomly selected subjects, did not take into account the differences among students, and did not include tests of validity and reliability.
The Phipps and Merisotis (1999) analysis included the conclusion that research has focused too much on individual courses rather than on academic programs, and has not taken into account differences among students. They postulated that based on these conclusions, there is a significant difference in the learning results, as evidenced by grades, of students participating in distance education as compared to their classroombased peers. Their analysis of Russell’s original work questioned both the quality and effectiveness of research comparing distance and traditional education delivery. While there has been ongoing conjecture that online education students are not receiving an equivalent learning experience compared to their traditional classroom counterparts, studies utilizing grades as an indicator of student learning have produced little evidence of the disparity. The incidence of studies showing significant negative differences in grades of online learners is small. Higher education institutions have indicated their support for online education, and its continued growth has allowed studies such as the present research to contribute to ongoing dialogue. Student Retention in Postsecondary Education Persistence and retention in higher education is an issue that has intrigued researchers for over 50 years. Quantitative studies conducted in the mid-20th century produced data that caused researchers to look at low retention rates in higher education 28 and search for answers. This question has continued to consume researchers and higher education institutions. In 1987, Tinto attempted to summarize studies of individual student retention in higher education by proposing a theory to allow higher education administrators to predict success and support students (Tinto, 1987). Tinto’s model of student engagement has been in use for over 20 years as higher education administrators and faculty attempt to explain student retention issues at universities and colleges. Tinto’s model primarily focused on factors of student engagement: How students respond to instructors, the higher education community itself, and students’ own engagement in learning are the primary factors Tinto theorized as determining the student’s retention. In the concluding remarks to his 1987 treatise on retention, Tinto acknowledged that persistence in higher education is but one facet of human growth and development, and one that cannot necessarily be attributed to a single factor or strategy. Tinto’s (1987) original study of student retention included the observation that student retention is a complicated web of events that shape student leaving and persistence. He observed that the view of student retention had changed since the 1950s, when students were thought to leave due to lack of motivation, persistence, and skills, hence the name dropout. In the 1970s, research began to focus on the role of the environment in student decisions to stay or leave. In the 1990s, Tinto proposed that the actions of the faculty were the key to institutional efforts to enhance student retention (Tinto, 2007). This was a significant addition to his theory, placing the cause on the instructor instead of the student, and it has done much to influence retention strategies 29 utilized in higher education institutions (Tinto, 2007). Tinto’s studies have driven research in both traditional retention studies and those involving distance education. 
Studies of the persistence of the postsecondary student routinely focus on 4-year postsecondary education. It is only within the last 20 years that persistence studies have included community college students and occupational students, acknowledging that their reasons for entering the postsecondary community are different from the traditional 4- year higher education participant (Cohen & Brawer, 1996). With different avenues to a baccalaureate degree more prevalent, the research into college persistence has expanded to include other types of programs and students. Postsecondary student retention rates routinely utilize data from longitudinal studies of students entering in a Fall semester and completing a bachelor’s program no more than 6 years later (NCES, 2003). The National Center for Education Statistics reported that 55% of those seeking a baccalaureate degree would complete in 6 years (NCES, 2003). The report acknowledged institutions are unable to follow students who transfer to other institutions; they are able to report only the absence of enrollment in their own institution. Research has also found a large gap between community college entrants and 4- year college entrants in rates of attaining a bachelor’s degree. Dougherty (1992) reported that students entering community college receive 11 to 19% fewer bachelor’s degrees than students beginning at a 4-year university. Dougherty postulated that the lower baccalaureate attainment rate of community college entrants was attributable to both their individual traits and the institution they entered (Dougherty, 1992). 30 Studies of student retention of community college also vary based on the types of students. Community college retention rates are routinely reported as lower than traditional 4-year institutions (NCES, 2007). Cohen and Brawer (1996) attributed the differences in retention to the difference in the mission. In many instances, students did not enroll in a community college in order to attain a degree (Cohen & Brawer, 1996). The most recent longitudinal study in 1993 showed a retention rate of 55.4% of students after 3 years (NCES, 2001). Of community college students, only 60.9% indicated a desire to transfer later to a baccalaureate degree completion program (NCES, 2003). While retention data collected by the federal government (NCES, 2003) did not include students with an AAS degree, Townsend’s studies of the transfer rates and baccalaureate attainment rates of students in Missouri who had completed an Associate of Arts and students who had completed an Associate of Applied Science degree was 61% compared to 54% (Townsend, 2001). Vocational or occupational programs have reported retention rates as “program completion,” a definition involving completion of specific tasks and competencies instead of grades and tied to a limited program length. This state and federal requirement indicates program quality and ensures continued federal funding. In 2001, the U.S. Department of Education reported a 60.1% completion rate of postsecondary students enrolled in occupational education (NCES, 2007). Until 1995, the reasons for students leaving was neither delineated nor reported; it was not until federal reporting requirements under the Carl Perkins Act of 1994 that institutions were required to explore why students were not retained in vocational programs (P.L. 105-332). 31 Distance education provided a new arena for the study of student persistence. 
Theorists and researchers have attempted to utilize Tinto’s model of student persistence to explain retention issues involved with distance education. However, Rovai (2003) analyzed the differing student characteristics of distance learners as compared to the traditional students targeted by Tinto’s original models and concluded that student retention theories proposed from that population were no longer applicable to distance education learners. Rovai proposed that distance educators could address retention in ways that traditional higher education has not. He suggested that distance educators utilize strategies such as capitalizing on students’ expectations of technology, addressing economic benefits and specific educational needs to increase student retention in courses (Rovai, 2003). The expanded use of technology created a distinct subset of research into student retention issues. In 2004, Berge and Huang developed an overview of models of student retention, with special emphasis on models developed to explain the retention rates in distance education. Their studies primarily focused on the variables in student demographics and external factors, such as age and gender, which influence persistence and retention in online learning. Berge and Huang found that traditional models of student retention such as Tinto’s did not acknowledge the differences in student expectations and goals that are ingrained in the student’s selection of the online learning option. Other researchers have attempted to study retention issues specifically for online education. In a meta-analysis, Nora and Snyder (2009) found the majority of studies of online education focused on students’ individual characteristics and individual 32 perceptions of technology. Nora and Snyder concluded that researchers attempt to utilize traditional models of student engagement to explain student retention issues in distance or online learning courses, with little or no success. This supported Berge and Huard’s conclusions. Nora and Snyder (2009) also noted a dearth of quantitative research. Few quantitative studies exist that support higher or equal retention in online students compared to their classroom-based counterparts. One example is the Carmel and Gold (2007) study. They found no significant difference in student retention rates between students in distance education courses and their traditional classroom-based counterparts. The study utilized data from 164 students, 95 enrolled in classroom-based courses and 69 enrolled in a hybrid online format. Participants randomly self-selected and were not all enrolled in the same course, introducing variables not attributed in the study. The majority of quantitative studies instead concluded there is a higher retention rate in traditional classrooms than in distance education. In the Phipps and Merisotis (1999) review of Russell’s original research, which included online education, results indicated that research has shown even lower retention rates in online students than in students attending classes in the traditional college setting. The high dropout rate among distance education students was not addressed in Russell’s meta-analysis, and Phipps and Merisotis found no suitable explanation in the research. They postulated that the decreased retention rate documented within distance education studies skews achievement data by excluding the dropouts. 
Diaz (2002) found a high drop rate for online students compared to traditional classroom-based students in an online health education course at Nova Southeastern. Other studies have supported the theory that retention of online students is far below that 33 of the traditional campus students. In 2002, Carr, reporting for The Chronicle of Higher Education, noted that online courses routinely lose 50 % of students who originally enrolled, as compared to a retention rate of 70-75% of traditional face-to-face students. Carr reported dropout rates of up to 75% in online courses as a likely indicator of the difficultly faced in retaining distance education students who do not routinely meet with faculty. The data have not been refuted. As community colleges began utilizing distance education, retention rates were reported as higher than traditional students (Nash, 1984). However, the California Community College System report for Fall 2008 courses showed inconsistent retention results for distance education learners, varying by the type of course. Results indicated equivalent retention rates for online instruction compared to traditional coursework in the majority of courses. Lower retention rates were indicated in online engineering, social sciences, and mathematics courses as compared to traditional classroom instructional models (California Community Colleges Chancellor's Office, 2009). Due to the limited number of vocational/technical or occupational courses taught in the online mode, there was little data on student retention. In 1997, Hogan studied technical course and program completion of students in distance and traditional vocational education and found that course completion rates were higher for distance education students. However, program completion rates were higher for traditional students than for students enrolled in distance education (Hogan, 1997). In summary, studies of retention have focused primarily on student characteristics while acknowledging that postsecondary retention rates vary according to a variety of factors. Research showed mixed results concerning the retention rate of online students, 34 though quantitative data leans heavily toward a lower course retention rate in online students. Data from 4-year universities have shown lower retention rates for online students than for traditional face-to-face students, while community colleges have shown inconsistent results. Data from vocational-technical education has been limited, but course retention rates are higher for online students, while program retention rates are lower. No significant research factor affecting retention has been isolated between students in online baccalaureate completion programs and students participating in traditional classroom-based settings. Summary Research studies have been conducted analyzing student retention in higher education, transfer and retention of students from community colleges to universities, the impact of distance education, and student achievement and retention factors related to distance education. However, no comparative research was identified that compared the achievement and retention of students participating in an occupationally oriented transfer program utilizing both online education and traditional classroom-based instruction. Chapter Three addresses the topics of research design, hypotheses, and research questions. Additionally, population and sample, data collection, and data analysis are discussed. 
CHAPTER THREE

METHODOLOGY

The purpose of this study was to determine if there is a significant difference between course grades of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The study also examined whether there is a significant difference between course retention and program retention of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The methodology employed to test the research hypotheses is presented in this chapter. The chapter is organized into the following sections: research design, hypotheses and research questions, population and sample, data collection, data analysis, and summary.

Research Design

A quantitative, quasi-experimental research design was selected to study grades, course retention, and program retention of students enrolled in the Technology Administration program. The design was chosen as a means to determine if significant differences occur between online and face-to-face students by examining numerical scores from all participants enrolled, and retention rates in both courses and programs in the Technology Administration program.

Hypotheses and Research Questions

This study focused on three research questions with accompanying hypotheses. The research questions and hypotheses guiding the study follow.

Research Question 1: Is there a statistically significant difference between students' grades in online classes and traditional face-to-face classes?

H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance.

Research Question 2: Is there a statistically significant difference between the course retention rate of students in online classes and traditional face-to-face classes?

H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

Research Question 3: Is there a statistically significant difference in program retention between students who entered the program in online classes and students who entered the program in traditional face-to-face classes?

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

Population and Sample

The two populations selected were students enrolled in online and face-to-face courses. The sample included students enrolled in Technology Administration courses. Student enrollment was analyzed for all Technology Administration courses in the program sequence to determine the number of samples available in online and face-to-face classes. The course enrollment data for the sample are outlined in Table E1. The subsample of the data utilized for the study is presented in Table 1.
Table 1

Technology Administration Enrollment Data

                          TA 300            TA 310
Year        Instructor    FTF     OL        FTF     OL
Spring 02   A                                14     25
Fall 02     A             11      20          9     26
Spring 03   A                                29     38
Fall 03     A             20      29         13     34
Spring 04   B                                32     25
Fall 04     B             18      32         10     28
Spring 05   B                                23     31
Fall 05     B             15      28         11     28
Spring 06   B                                13     30
Fall 06     B             14      24         24     32
Spring 07   B                                15     33
Fall 07     B             16      23         27     30
Spring 08   B                                22     35
TOTAL                     94     156        242    395

Note: TA 300 = Evolution and Development of Technology; TA 310 = Technology and Society. FTF = face-to-face; OL = online.

The subsample for hypothesis 1 and hypothesis 2 included all students enrolled in two entry-level courses required for completion of the Technology Administration program: TA 300 Evolution and Development of Technology, and TA 310 Society and Technology. The university offered the courses in online and face-to-face formats during the period of the study. Two instructors, identified as A and B, were involved with teaching the online and face-to-face courses. The two courses were selected because they met the following criteria: (a) the same faculty member taught both courses, (b) the courses were offered consistently in online and face-to-face instruction over the period of the study, and (c) the syllabi for simultaneous online and face-to-face sections were identical.

For hypothesis 3, data included records of all students enrolled in TA 300 Evolution and Development of Technology for the Fall semesters of 2002, 2003, 2004, 2005, and 2006. The course was selected for inclusion in the study based on the following criteria: (a) student enrollment in the course was the result of declaration of the Technology Administration program major, and (b) the parameters of the study allowed students 2 or more years to complete the program requirements. For the purpose of the study, all student names were removed.

Data Collection

An Institutional Review Board (IRB) form was prepared for Washburn University approval prior to data collection. The study was designated as an exempt study. The Washburn University IRB form is provided in Appendix A, and approval of the IRB was transmitted by e-mail; a copy is located in Appendix B. In addition, an IRB form was submitted to Baker University. The form is located in Appendix C, and the Baker IRB approval letter is located in Appendix D.

Washburn University had two types of data collection systems in place during the period identified for the study, Spring 2002 through Spring 2008. The AS 400 data collection system generated paper reports for 2002 and 2003. The researcher was allowed access to the paper records for 2002 and 2003, and enrollment results for all Technology Administration sections for 2002-2003 were entered manually into an Excel spreadsheet. In 2004, the University transferred to the Banner electronic student data management system. All records since 2004 were archived electronically and were retrieved utilizing the following filters for data specific to students enrolled in the identified Technology Administration courses: the TA course designation and specific coding for the year and semester to be analyzed (01 = Spring semester, 03 = Fall semester, 200X for the specified year). Results retrieved under the Banner system were saved as an Excel spreadsheet by the researcher. The course enrollment data for the sample are presented in Tables E1 and E2.
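To make this data-assembly step concrete, the following minimal sketch illustrates how archival course records of this kind could be filtered and coded for analysis in Python with pandas. It is illustrative only: the study itself relied on manual Excel entry and Banner exports analyzed in SPSS, and the file name, column names, and grade codes below are assumptions rather than the actual Washburn University data structures.

```python
# Illustrative sketch only; the file name, column names, and grade codes
# are hypothetical, not the actual AS 400/Banner export used in the study.
import pandas as pd

# Each row represents one student-course enrollment saved to a spreadsheet.
records = pd.read_excel("ta_enrollment_export.xlsx")

# Term codes follow the convention described above:
# a four-digit year followed by 01 (Spring) or 03 (Fall).
records["year"] = records["term_code"].astype(str).str[:4].astype(int)
records["semester"] = records["term_code"].astype(str).str[-2:].map(
    {"01": "Spring", "03": "Fall"})

# Restrict to the two Technology Administration courses in the subsample.
courses = records[records["course"].isin(["TA 300", "TA 310"])].copy()

# Code course retention: a reported final grade counts as retained;
# withdraw (W) and withdraw failing (WF) count as not retained.
courses["retained"] = ~courses["grade"].isin(["W", "WF"])

# Enrollment counts by course, term, and delivery mode (cf. Table 1).
summary = (courses
           .groupby(["course", "year", "semester", "delivery"])
           .size()
           .unstack("delivery", fill_value=0))
print(summary)
```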
Student transcripts and records were analyzed to determine program completion or continued enrollment in the program for the program retention analysis. Documents examined included paper student advising files located within the Technology Administration department and specific student records housed within the Banner reporting system. Technology Administration course TA 300 was selected based on the following: (a) it is a required entry course only for Technology Administration majors, and (b) TA 310 is a dual enrollment course for business department majors.

Data Analysis

Data analysis for all hypothesis testing was conducted utilizing SPSS software version 16.0. The software provided automated analysis of the statistical measures. To address Research Question 1, a two-factor analysis of variance was used to analyze for a potential difference due to delivery method (online and face-to-face), a potential difference due to instructor (instructors A and B), and a potential interaction between the two factors. When the analysis of variance reveals a difference between the levels of any factor, Salkind (2008) referred to this as a main effect. This analysis produces three F statistics, testing whether the grades of online students as compared to their classroom-based counterparts were affected by a main effect for delivery, a main effect for instructor, and an interaction between instructor and delivery.

Chi-square testing was selected to address Research Questions 2 and 3. The rationale for selecting chi-square testing was to observe whether a specific distribution of frequencies is the same as would occur by chance (Salkind, 2008). If the obtained chi-square value is greater than the critical value, there is sufficient evidence to believe the research hypothesis is true. For Research Question 2, a chi-square test for differences between proportions analyzed course retention of online and face-to-face students at the end of the semester. For Research Question 3, a chi-square test for differences between proportions analyzed program retention, comparing students who began the program in the online section of TA 300 to the students who began in the face-to-face section.
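As a concrete illustration of this analysis plan, the following minimal sketch shows an equivalent two-factor analysis of variance in Python using statsmodels. It is a sketch only, not the procedure actually run: the study used SPSS 16.0, and the data file and column names (grade_points, delivery, instructor) are assumptions made for the example.

```python
# Illustrative sketch of the two-factor ANOVA described above.
# The study itself used SPSS 16.0; the file and column names here
# (grade_points, delivery, instructor) are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

grades = pd.read_csv("ta_course_grades.csv")  # hypothetical export

# Model grade points (A = 4.0 ... F = 0.0) on delivery method,
# instructor, and their interaction.
model = ols("grade_points ~ C(delivery) * C(instructor)", data=grades).fit()

# The ANOVA table contains the three F statistics of interest:
# the main effect of delivery, the main effect of instructor,
# and the delivery-by-instructor interaction.
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```

Type II sums of squares (typ=2) are used here as a reasonable default for an unbalanced two-factor design; the SPSS results reported in Chapter Four remain the study's authoritative output.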
Limitations of the Study

Roberts (2004) defined the limitations of a study as those features that may affect the results of the study or the ability to generalize the results. The limitations of this study included (a) the potential for data entry error, (b) curriculum modifications made by instructors over the period of the study that were not reflected in the syllabi, (c) the behavior of the instructors during delivery in the two different formats, and (d) the rationale of students for selecting one course delivery method over another. These may affect the generalizability of this study to other populations.

Summary

This chapter described the research design, population and sample, hypotheses, data collection, and analysis used in this research study. Statistical analyses using two-way analysis of variance and chi-square tests were used to determine if there are statistically significant differences in the course grades, course retention, and program retention of students enrolled in online classes as compared to their face-to-face counterparts. The results of this study are presented in Chapter Four.

CHAPTER FOUR

RESULTS

The study had three main purposes. The first purpose was to determine if there was a difference in grades between students in online classes and students in traditional face-to-face classes in the Technology Administration program. In addition, the study was designed to examine the difference in course retention rates of students in the online classes as compared to the face-to-face classes. The third part of the study was designed to examine program retention rates of students who began the program in online classes and students who began the program in traditional face-to-face classes. This chapter begins with the descriptive statistics for the sample: gender, age, grades by gender, and course selection of students in online or face-to-face courses by gender. From the three research questions, research hypotheses were developed, and the results of the statistical analyses used to test each hypothesis are presented.

Descriptive Statistics

Demographic data for the sample were collected from the student data system for 2002 through 2009. The descriptive statistics presented below include gender (n = 884), age (n = 880), grades by gender (n = 884), and course selection online or face-to-face by gender (n = 884). Table 2 presents the cross-tabulation of the frequencies for gender and age of the sample selected for the study. The mean age for the sample tested was 31.06 years, with a standard deviation of 9.46 years. The age range of the sample was from 18 to 66 years. One participant did not report gender. Age was not available for four participants.

Table 2

Participant Age Group by Gender (n = 880)

              Age Range in Years
          < 20   20-29   30-39   40-49   50-59   60-69
Female       0     198     121      62      29       3
Male         5     281     104      53      19       5

Note: Gender not reported for one participant; age not reported for four participants. Females = 413; Males = 467.

Table 3 presents the frequency of course grades by gender and the total number of students receiving each grade. Grades were distributed across the continuum, with slightly more females than males receiving A's, more males than females receiving B's, C's, and F's, and an equal distribution of students receiving D's. More males withdrew from classes than did females.

Table 3

Course Grades by Gender (n = 884)

Grade               Female   Male   Total
A                      245    208     453
B                       53     79     132
C                       32     70     102
D                       17     16      33
F                       37     55      92
No Credit                1      0       1
Passing                  0      1       1
Withdraw                25     42      67
Withdraw Failing         3      0       3
Total                  413    471     884

Note: Gender not reported for one participant.

Table 4 presents the course selection patterns of male and female students. Overall, more students selected online courses than face-to-face courses. Females and males enrolled in online courses in nearly equal numbers; however, proportionally more females (68.7%) than males (60.9%) chose the online instructional format instead of face-to-face.

Table 4

Course Selection by Gender (n = 884)

Course Type    Female   Male   Total
Face-to-face      129    184     313
Online            284    287     571
Total             413    471     884

Note: Gender not reported for one participant.

Hypothesis Testing

H1: There is a statistically significant difference in the course grades of students enrolled in online classes and students enrolled in a traditional classroom setting at the 0.05 level of significance.

The sample consisted of 815 students enrolled in online and face-to-face Technology Administration courses at Washburn University. A two-factor analysis of variance was used to analyze for the potential difference in course grades due to delivery method (online and face-to-face), the potential difference due to instructor (instructors A and B), and the potential interaction between the two independent variables. Means and standard deviations for grades were calculated by delivery type and instructor. Table 5 presents the descriptive statistics.
The mean of grades by delivery showed no significant difference between online and face-to-face instruction. Additionally, no significant difference in mean grade was evident when analyzed by instructor.

Table 5

Means and Standard Deviations by Course Type and Instructor

Course type     Instructor   Mean     Standard Deviation     n
Face-to-face    A            3.0690   1.41247               29
                B            2.9586   1.39073              266
                Total        2.9695   1.39084              295
Online          A            2.9024   1.52979               41
                B            3.0271   1.35579              479
                Total        3.0271   1.36911              520
Total           A            2.9714   1.47414               70
                B            3.0027   1.36783              745
                Total        3.000    1.37635              815

The results of the two-factor ANOVA, presented in Table 6, indicated there was no statistically significant difference in grades due to delivery method (F = 0.078, p = 0.780, df = 1, 811). This test was specific to hypothesis 1. In addition, there was no statistically significant difference in grades due to instructor (F = 0.002, p = .967, df = 1, 811), and no significant interaction between the two factors (F = 0.449, p = 0.503, df = 1, 811). The research hypothesis was not supported.

Table 6

Two-Factor Analysis of Variance (ANOVA) of Delivery by Instructor

Source                    df      F       p
Delivery                   1    0.148   0.780
Instructor                 1    0.003   0.967
Delivery * Instructor      1    0.449   0.503
Error                    811
Total                    815

H2: There is a statistically significant difference in student course retention between students enrolled in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

The sample consisted of 884 students enrolled in TA 300 and TA 310 online and face-to-face courses. The hypothesis testing began with the analysis of the contingency data presented in Table 7. The data are organized with course selection (online or face-to-face) as the row variable and retention in the course as the column variable. Data were included in the retained column if a final grade was reported for the participant. Participants who were coded as withdraw or withdraw failing were labeled as not retained. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as would occur by chance (Roberts, 2004). The result of the chi-square testing (X2 = 2.524, p = .112, df = 1, N = 884) indicated there was no statistically significant difference between the retention of students enrolled in online courses and that of students enrolled in face-to-face courses in the TA program. Additional results indicated that 93.92% (294/313) of the face-to-face students were retained, compared to 90.89% (519/571) of the online students. The research hypothesis was not supported.

Table 7

Course retention of online and face-to-face TA students

                         Retained   Not retained   Total
Face-to-face students         294             19     313
Online students               519             52     571
Total                         813             71     884
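As a concrete check on this comparison, the 2 x 2 contingency table above can be analyzed with a standard chi-square test of independence. The minimal sketch below uses Python's scipy rather than the SPSS procedure employed in the study; run on the Table 7 counts without a continuity correction, it reproduces a chi-square value of approximately 2.52 and a p value of approximately .11, in line with the values reported above.

```python
# Chi-square test of independence on the Table 7 counts.
# Illustrative only: the study ran this analysis in SPSS 16.0.
from scipy.stats import chi2_contingency

# Rows: face-to-face, online; columns: retained, not retained.
table7 = [[294, 19],
          [519, 52]]

# correction=False requests the uncorrected Pearson chi-square statistic.
chi2, p, dof, expected = chi2_contingency(table7, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}, df = {dof}")
# Expected output (approximately): chi-square = 2.524, p = 0.112, df = 1
```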
H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

The sample consisted of 249 students enrolled in TA 300 online and face-to-face courses from Fall 2002 through Fall 2008. The hypothesis testing began with the analysis of the contingency data presented in Table 8. The table is organized with course selection (online or face-to-face) as the row variable and program retention as the column variable. Data were included in the retained column if students had successfully met the requirements for a Bachelor of Applied Science in Technology Administration or if they were enrolled in the program in Spring 2009. Data were included in the not-retained column if students had not fulfilled degree requirements and were not enrolled in Spring 2009. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as would occur by chance (Roberts, 2004).

The result of the chi-square testing (X2 = .132, p = .717, df = 1, N = 249) indicated there was no statistically significant difference between the program retention rate of students who began the TA program in the online courses and that of students who began the program in the face-to-face courses. Additional results showed that 91.57% (163/178) of students who began in online courses were retained, compared to 92.96% (66/71) of students who began the TA program in face-to-face courses. The research hypothesis was not supported.

Table 8

Program retention of online and face-to-face TA students

                Retained   Not retained   Total
Face-to-face          66              5      71
Online               163             15     178
Total                229             20     249

Summary

In this chapter, an introduction provided a summary of the analysis and statistical testing in the order in which it was presented. This was followed by descriptive statistics of the sample, including the age range of participants, grades by gender, and course selection by gender. Results from testing of H1 revealed no significant difference between course grades of online students and students enrolled in traditional face-to-face classes. Chi-square testing was utilized for testing of H2. Results indicated there was no significant difference in course retention of students enrolled in online courses and students enrolled in traditional face-to-face courses. H3 was also tested utilizing chi-square testing. The results indicated no significant difference in program retention of students who began the TA program in online courses and students who began in traditional face-to-face courses. Chapter Five provides a summary of the study, a discussion of the findings in relationship to the literature, implications for practice, recommendations for further research, and conclusions.

CHAPTER FIVE

INTERPRETATION AND RECOMMENDATIONS

Introduction

In the preceding chapter, the results of the analysis were reported. Chapter Five consists of the summary of the study, an overview of the problem, the purpose statement and research questions, a review of the methodology, major findings, and findings related to the literature. Chapter Five also contains implications for further action and recommendations for further research. The purpose of the latter sections is to expand on the research into distance education, including implications for the expansion of course and program delivery and future research. Finally, a summary is offered to capture the scope and substance of what has been offered in the research.

Study Summary

The online delivery of course content in higher education has increased dramatically in the past decade. Allen and Seaman (2007a) reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. They also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. As online delivery has grown, so has criticism of its efficacy. Online delivery of education has become an important strategy for the institution that is the setting of this study. The purpose of this study was three-fold.
The first purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study was designed to expand the knowledge base concerning online education and its efficacy in providing baccalaureate degree completion opportunities.

The research design was a quantitative study to compare course grades, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized to compare online and traditional face-to-face students. In order to answer Research Question 1, a sample of students enrolled in TA 300 and TA 310 online and traditional face-to-face courses was analyzed. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for a potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), the instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze both course and program retention (Research Questions 2 and 3). For Research Question 2, archived data from the Washburn University student system were analyzed for students enrolled in TA 300 and TA 310. Additional variables identified for this sample included course selection and instructor (A or B). For Research Question 3, archived data from the Washburn University system were used to identify students with declared Technology Administration majors who began the TA program enrolled in online and face-to-face courses. A single gatekeeper course (TA 300) was identified for testing. Two instructors (A and B) were responsible for instruction during the testing period.

A two-factor ANOVA was utilized to test H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance. ANOVA testing was utilized to account for the two delivery methods and two instructors involved for the period of the study. The results of the test indicated there was no statistically significant difference in grades due to delivery method. The results of the testing also indicated no statistically significant difference in grades due to instructor and no interaction between the two independent variables. The research hypothesis was not supported. To test the next hypothesis, chi-square testing was utilized.
H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in course retention of students enrolled in online courses and students enrolled in face-to-face courses in the TA program. The research hypothesis was not supported. To test the final hypothesis, chi-square testing was also used. H3: There is a statistically significant difference in program retention between students who begin the 54 Technology Administration program in online courses and students who begin in face-toface courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in the program retention rate of students who began the TA program in the online courses and students who began the program in the face-to-face courses. The research hypothesis was not supported. Testing found that course retention was high in both formats, leading to interpretation that higher results may be due to the age of participants or prior degree completion. The results found no significant difference in grades, course, or program retention for students in online TA courses and students enrolled in traditional face-to-face instruction. The implication of these results compared to current literature is discussed in the next section. Findings Related to the Literature Online education has become a strategy for higher education to provide instruction to students limited by distance or time, or who, for other reasons, do not wish to attend traditional classroom-based university classes. Additionally, online education allows higher education institutions to expand their geographic base. Institutions have utilized distance education for over a century to provide instruction, but it was only within the last two decades that instruction over the Internet had replaced correspondence, television, and video courses as the method of choice for delivery (Russell, 1999). Utilizing grades as a measure of achievement, meta-analyses conducted by Russell (1999), Shachar and Neumann (2003), and Machtmes and Asher (2002) found no significant difference in grades of online students and traditional classroom-based 55 students. These analyses utilized multiple studies of course information, comparing grades of online students and traditional face-to-face students, primarily utilizing t tests as the preferred methodology. The results of previous research were supported by the present study. Additionally, this study went further, analyzing data over more than one semester, controlling for the effect of different instructors. These results were contrary to the conclusion reached by Phipps and Merisotis (1999). The second purpose of the study was to determine if a significant difference existed between the course retention of students enrolled in online TA courses and students enrolled in face-to-face courses. Meta-analyses conducted by Phipps and Merisotis (1999) and Nora and Snyder (2009) concluded a much lower course retention rate in online students as compared to their face-to-face counterparts. The previous metaanalyses examined retention of online students and traditional face-to-face students in distinct courses, utilizing t tests as the primary methodology. 
The chosen method of t tests was used instead of the chi square testing due to the limitations of the studies to one course taught by one instructor, limited to one semester or cycle. Carr (2002) reported in The Chronicle of Higher Education that retention of online students was 50% less than that of traditional face-to-face students. Carr’s results were based on the examination of longitudinal retention data from universities as reported to the United States Department of Education. The results of the present study found no significant difference in the course retention rates. These results are supported by the findings of Carmel and Gold (2007) in which they reported no significant difference in course retention rates of online students compared to traditional face-to-face students in their analysis of students in multiple 56 courses in disciplines across a 4-year university. The present study expanded those results, examining course data in the same discipline over a 6-year period and controlling for delivery by two separate instructors. Research into program completion rates of AAS students has been conducted primarily in traditional university settings, including Townsend’s (2002) studies at the University of Missouri-Columbia. Townsend’s results showed a lower baccalaureate completion rate for students entering with an AAS than students who transferred to 4- year universities with an AA degree. Studies by Hogan (1997) of vocational-education programs also found a lower program completion rate for online students compared to students in traditional delivery vocational education programs. Analysis of the data in the current study showed no significant difference in program completion rate of students who began in online TA courses as compared to students who began the program in faceto-face courses. Conclusions The use of distance education for postsecondary instruction, primarily in the form of the Internet, has both changed and challenged the views of traditional university-based instruction. Multiple studies have been designed in an effort to examine whether online students have the same level of academic achievement as their traditional higher education peers. The present study agrees with the research indicating there is no statistically significant difference in the grades of online students and their face-to-face counterparts. In addition, with student retention an issue for all postsecondary institutions, the data from previous studies indicated a lower retention rate for online students than for their traditional face-to-face classmates. The current study contradicted 57 those arguments. In the following sections, implications for action, recommendations for research, and concluding remarks are addressed. Implications for Action As postsecondary institutions move into the 21st century, many have examined issues of student recruitment and retention in an effort to meet the demands of both their students and their communities. The majority of postsecondary institutions have initiated online education as a strategy to recruit students from beyond their traditional geographic areas. This study supported existing research utilizing grades as a measure of achievement and should alleviate doubt that online students are shortchanged in their education. The transition of existing face-to-face to courses to an online delivery model can be accomplished without sacrificing achievement of course and program goals. 
The study also examined course and program retention data, finding no significant differences between online and traditional students in the TA program. The findings of this study support the expansion of online courses and programs within the School of Applied Studies. Finally, this study can provide the basis for further action, including analyzing other programs and courses offered in the online format by the University. The analysis of other programs offered in an online delivery model would enhance further development of online courses and programs.

Recommendations for Future Research

Distance education delivery has expanded dramatically with the use of the Internet for online instruction. The present study could be continued in future years to measure the effects of specific curriculum delivery models and changes made to online delivery platforms. In addition, the study could be expanded to include specific characteristics of student retention named in the literature, such as examining whether the age and entering GPA of students provide any insight into course and program retention. The study could also be expanded to include other universities with similar baccalaureate-degree completion programs and other disciplines. Because the body of research is limited concerning the baccalaureate-degree completion of students who begin their postsecondary education in career-oriented instruction, there is value in continuing to study baccalaureate completion rates, both in an online format and in more traditionally based settings.

Concluding Remarks

The current study examined a Technology Administration program that has been offered in both online and face-to-face formats, utilizing data from Fall 2002 through Spring 2008. The TA program was developed to allow students who had completed an occupationally oriented AAS degree to complete a bachelor’s degree program. Three hypotheses were tested in this study, examining course grades, course retention, and program retention of students enrolled in online and face-to-face courses in Technology Administration. No significant difference was found for any of the three hypotheses. These results form a strong foundation for expanding online courses and programs at Washburn University. By addressing two of the major concerns of educators, achievement and retention, the study results allow expansion of online courses and programs to benefit from data-driven decision-making. Other institutions can and should use similar data to examine their existing online courses and programs.

REFERENCES

Allen, I. E., & Seaman, J. (2003). Seizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2007a). Making the grade: Online education in the United States. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2007b). Online nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium.
Arle, J. (2002). Rio Salado College online human anatomy. In C. Twigg, Innovations in online learning: Moving beyond no significant difference (p. 18). Troy, NY: Center for Academic Transformation.
Atkins, T. (2008, May 13). Changing times bring recruiting challenges at WU. Retrieved May 15, 2008, from CJOnline Web site at http://cjonline.com/stories/051308/loc_278440905.shtml
Berge, Z., & Huang, L. P. (2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special attention to e-learning. American Center for the Study of Distance Education. Retrieved April 17, 2009, from DEOSNEWS Web site at http://www.ed.psu.edu/acsde/deos/deosnews/deosarchives.asp
Bradburn, E., Hurst, D., & Peng, S. (2001). Community college transfer rates to 4-year institutions using alternative definitions of transfer. Washington, DC: National Center for Education Statistics.
Brown, B. W., & Liedholm, C. (2002, May). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92, 444-448.
California Community Colleges Chancellor's Office. (2009, April 20). Retention rates for community colleges. Retrieved April 20, 2009, from https://misweb.cccco.edu/mis/onlinestat/ret_suc_rpt.cfm?timeout=800
Carmel, A., & Gold, S. S. (2007). The effects of course delivery modality on student satisfaction and retention and GPA in on-site vs. hybrid courses. Retrieved September 15, 2008, from ERIC database. (Doc. No. ED496527)
Carnevale, D. (2006, November 17). Company’s survey suggests strong growth potential for online education. The Chronicle of Higher Education, p. 35.
Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education, pp. 1-5.
Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass.
Diaz, D. (2002, May-June). Online drop rates revisited. Retrieved April 8, 2008, from The Technology Source Archives Web site at http://www.technologysource.org/article/online_drop_rates-revisited/
Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. The Journal of Higher Education, 63, 188-214.
Ebel, R., & Frisbie, D. (1991). Essentials of educational measurement. Englewood Cliffs, NJ: Prentice Hall.
The Harvard guide. (2004). Retrieved May 20, 2008, from http://www.news.harvard.edu/guide
Hogan, R. (1997, July). Analysis of student success in distance learning courses compared to traditional courses. Paper presented at Sixth Annual Conference on Multimedia in Education and Industry, Chattanooga, TN.
Jacobs, J., & Grubb, W. N. (2003). The federal role in vocational education. New York: Community College Research Center.
Joliet Junior College history. (2008). Retrieved May 20, 2008, from Joliet Junior College Web site at http://www.jjc.edu/campus_info/history/
Kansas Board of Regents. (2002-2003). Degree and program inventory. Retrieved May 14, 2008, from http://www.kansasregents.org
Keeley, E. J., & House, J. D. (1993). Transfer shock revisited: A longitudinal study of transfer academic performance. Paper presented at the 33rd Annual Forum of the Association for Institutional Research, Chicago, IL.
Knowles, M. S. (1994). A history of the adult education movement in the United States. Melbourne, FL: Krieger.
Laanan, F. (2003). Degree aspirations of two-year students. Community College Journal of Research and Practice, 27, 495-518.
Lynch, T. (2002). LSU expands distance learning program through online learning solution. T.H.E. Journal (Technological Horizons in Education), 29(6), 47.
Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-41.
Gilman, E. W., Lowe, J., McHenry, R., & Pease, R. (Eds.). (1998). Merriam-Webster’s collegiate dictionary. Springfield, MA: Merriam.
Nash, R. (1984, Winter). Course completion rates among distance learners: Identifying possible methods to improve retention. Retrieved April 19, 2009, from Online Journal of Distance Education Web site at http://www.westga.edu/~distance/ojdla/winter84/nash84.htm
National Center for Education Statistics. (2000). Distance education statistics 1999-2000. Retrieved March 13, 2008, from http://nces.ed.gov/das/library/tables_listing
National Center for Education Statistics. (2001). Percentage of undergraduates who took any distance education courses in 1999-2000
USER: What are some of the benefits of online education? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
11
9
14,271
null
299
Use only context block information. Write the answer in bullet points only. Less than 100 words.
Write a cost-benefit analysis of the 1983 Act.
After the 1973 Act, in fulfilling the objective of controlling pollution, each Water Authority had created a water quality advisory panel to monitor its performance in meeting water quality requirements. The objective of the advisory panels was to achieve some independence between the water authority’s functions of public supply, pollution control and monitoring of environmental performance. In a move to address the problem of poor surface water quality, the National Water Council published a classification of river quality objectives in 1977. The classification system related to the purposes for which water was to be used, based on five basic classes of river waters:

1A - High quality waters suitable for all abstraction purposes with only modest treatment. Capable of supporting high class fisheries. High amenity value.
1B - Good quality waters usable for substantially the same purposes as 1A though not as high quality.
2 - Fair quality waters viable as coarse (freshwater) fisheries and capable of use for drinking water provided advanced treatment is given. Moderate amenity value.
3 - Poor waters polluted to the extent that fish were absent or only sporadically present. Suitable only for low grade industrial abstractions.
4 - Bad quality waters which were grossly polluted and likely to cause a nuisance.

This classification was adopted by each water authority in setting informal river quality objectives and to define the permits for treated sewage discharges. The suitability of this classification system was later questioned, as it introduced the concept that high river quality was a lower priority unless specific uses compelled it. With significant scope for discretion in the setting of standards by the water authorities and no imposed national standards, it ultimately led to a review of the number of discharge permits, which led to a relaxation of their requirements[36]. It further masked the problems of declining water quality and was clearly insufficient to satisfy EC law. With little political acceptance of the dramatic increases required in customer bills to address the problems of under-investment and declining infrastructure, the government continued to delay implementation of the condition from the 1974 Act that required the water authorities to publish pollution registers against the performance of discharge permits. This was contrary to the openness required once the authorities were given the conflicting roles of sewage works operators and river quality regulators[37], conflicted with the water authorities’ role to prevent pollution and led the water quality advisory panels to be largely ineffective. With the water authorities unwilling to self-regulate and self-prosecute, there was a sharp increase in the number of incidents of river pollution[38]. Lack of public access to information on discharge permits and pollution incidents further compounded the problem.

3.4 WATER ACT 1983

In response to the problems created by the increasing capital investment requirements of the water authorities and the requirement to address the problems of environmental pollution, the government introduced the Water Act 1983. The assumption underlying the 1983 Act was that water customers were best served by an efficiently run water utility providing prescribed service standards at least cost. The 1983 Act changed the organisational structure of the water authorities, reduced the role of local government, and, by allowing companies to operate in a more commercial manner, paved the way for privatisation.

3.4.1 Constitutional changes

Until 1983, the water authorities were run by large boards with a majority of local authority representatives (see section 3.1.4). The 1983 Act reduced the size of the board structures with the intention of making these smaller and more businesslike by reducing the number of representatives from local authorities. Although all members continued to be appointed by central government, a series of chairman vacancies was filled by people with experience in the industry rather than experience of public affairs. The 1983 Act provided for Consumer Consultative Committees to represent the interests of customers following the abolition of locally elected councillors as water authority members, and as a result of restrictions in public access to management meetings of the authorities.
Local authorities were left to propose how the committees were set up, but the government published guidelines indicating how this should be done. The guidelines were criticised for a number of reasons, including (i) the committees had wide terms of reference that covered national issues, but were intended to be set up on a regional basis and deal with regional issues, and (ii) they had little independence from the water authorities. In addition, the 1983 Act abolished the National Water Council, which had done little to promote the views of the water industry to central government since its implementation[40].

3.4.2 Financial changes

The 1983 Act initiated many of the financing changes that were ultimately required at privatisation and started the process of transforming the water industry from a public service to a business organisation. The 1983 Act made express provision for water authorities to borrow directly from the private capital markets rather than solely from central government. However, in practice central government continued to exercise control over the authorities’ borrowing, and this acted to prevent the authorities from private borrowing. The 1983 Act introduced the principle of cost-benefit to the industry for assessing capital investment requirements, and attempts were made to introduce long-run marginal cost pricing for determination of water tariffs[41].

THE NEED FOR CHANGE

Section II of the Control of Pollution Act 1974 (COPA II) finally became effective from 1985 and required publication of discharge permit standards. However, in practice, the changes brought about by COPA II or the 1983 Act did little to improve the environmental performance of the water authorities, measured by improvements in river water quality. Despite the above-inflation price rises from the early 1980s onwards (Figure 3.3.1b), the 1985 River Quality Survey showed, for the first time since surveys were undertaken in 1958, that the length of river quality deterioration had overtaken that of river water quality improvement. In total, 903 km out of the 40,000 km of rivers surveyed showed a net deterioration over the period[42]. And in 1988, for example, 742 out of 6,407 sewage treatment works failed their discharge permit requirements. The continued lack of investment meant that a significant number of incidents of pollution continued to occur and the United Kingdom continued to be in breach of a number of EC Directives.
The decision by the EC to start prosecution proceedings against the government for non-compliance with two EC Directives in the mid-1980s was a major factor in the government recognising the requirement for further significant capital investment and control of pollution. With government unwilling to fund the increased investment requirements either from increases in taxes or increased borrowing, and with its broader programme of privatisation of utilities underway, the government started to consider the privatisation of the industry. The next section describes the process of privatisation.

4. PRIVATISATION

4.1 INTRODUCTION

The proposals for privatisation of the water industry were in response to the need for more investment in the industry than the government was prepared to fund from public finance. There was also a prevailing policy which favoured privatisation as a means of securing efficiency; British Telecom and British Gas had been privatised in 1984 and 1986 respectively. The government first published its proposals in a discussion paper on water privatisation in 1986[43].

4.2 INITIAL PROPOSALS

The 1986 discussion paper proposed privatisation of the water authorities as they existed. This would have simply transferred the water authorities to private ownership, without changes to their powers or responsibilities. It would have required the authorities, as private companies, to have responsibility for providing water and sewerage services and to have responsibility for flood control, river water quality and control of abstraction. The 1986 discussion paper included the concept of comparative competition, such that the privatised undertakers would be competing in the financial markets for access to finance and the performance of each company could be compared. The government considered profit would be a more effective incentive for improved management performance than government controls. However, to protect customers’ interests, a system of regulatory controls would be required to prevent privatised water authorities from overcharging customers or providing poor standards of service. The paper proposed that a Director General of Water Services would set price limits and performance standards for each licensed company[44].

4.2.1 Economic Regulation

The proposals for privatisation of the water industry differed in three fundamental respects from those of the gas and telecoms industries:
- privatisation would involve not one (as in gas and telecoms), but ten Water Authorities;
- the water and sewerage industries are distinctive in that they have duties concerning the protection of the environment; and
- natural monopoly conditions were more prevalent in the water and sewerage industry because it consisted of local and regional monopolies with no national distribution network.

Alongside its plans for sale and restructuring of the water and sewerage services, the government commissioned a report to discuss the proposals for economic regulation of the industry[45].
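As a purely illustrative aside on the cost-benefit principle and long-run marginal cost pricing mentioned in section 3.4.2 above, a simple appraisal of a capital scheme might compare discounted costs and benefits as sketched below. The scheme, figures, and discount rate are hypothetical and are not taken from this document.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a stream of annual cash flows, with year 0 first."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical sewage treatment upgrade: capital outlay now, then annual net
# benefits (avoided pollution incidents, improved compliance) for 15 years.
capital_cost = -120.0
annual_net_benefit = [18.0] * 15
flows = [capital_cost] + annual_net_benefit

rate = 0.06  # hypothetical test discount rate
print(f"NPV at {rate:.0%}: {npv(flows, rate):.1f} (positive values favour the investment)")
```

A fuller appraisal would also weigh the statutory quality obligations discussed above.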
false
16
7
657
null
763
Use only the document provided. If the question cannot be answered then respond with 'I am unable to answer this request'
Summarize the information from this paper.
BIAS IN POLICING

Bias in the American legal system includes biases in law enforcement or policing, where racial disparities have long been documented and continue to persist. Compared with White Americans, Black and Latino men are disproportionately more likely to be stopped, searched, and arrested by police officers (Kahn & Martin 2016). Furthermore, members of these minority groups also experience greater use of force by the police (Goff & Kahn 2012, Kahn et al. 2016). Recently, a string of high-profile deadly cases involving Black men like Michael Brown, Eric Garner, and Walter Scott has increased public awareness of these hostile interactions with law enforcement. An initial analysis of public records revealed that non-White minorities made up almost half (47%) of all people killed by the police, despite comprising only 37% of the population. Furthermore, of those killed, 32% of Blacks and 25% of Latinos were unarmed, compared with 15% of Whites (Swaine et al. 2015). This troubling pattern of statistics has called into question the role that race may play in police decisions. Psychological research has examined this important social issue by directly investigating the content of racial stereotypes, as well as indirectly assessing how these associations affect perceptions and behavior. Self-report surveys have indicated that hostility, violence, and criminality are commonly associated with Black Americans, even by egalitarian-minded White Americans (Devine 1989, Devine & Elliot 1995, Dovidio et al. 1986). Additionally, priming low-prejudiced individuals with Black versus White stimuli typically results in the faster categorization of negative than positive attributes (e.g., Fazio et al. 1995, Greenwald et al. 1998, Wittenbrink et al. 1997). Together, these findings suggest that awareness of social stereotypes and exposure to stigmatized group members can affect decision making.

The Impact of Race on Weapon and Crime Perception

Applying the above rationale to police contexts, Payne (2001) developed the Weapons Identification Task (WIT) to better understand the psychological mechanisms that may drive racially biased shootings. This sequential priming procedure involves a series of trials that begin with the presentation of a Black or White face, which participants are instructed to ignore. After 200 ms, the prime is replaced by the target stimulus, which is a picture of either a tool or a handgun. Participants must correctly categorize the object as quickly as possible using one of two computer keys. Across two initial studies, Payne (2001) found evidence of racial bias in both the reaction times and error rates. Following the presentation of a Black versus White facial prime, participants were faster to correctly identify a gun and more likely to misidentify a tool as a gun, depending on the implementation of a response deadline. The results revealed that the racial primes had an automatic influence on the visual identification of weapons (see also Amodio et al. 2004, Klauer & Voss 2008, Payne et al. 2002). As such, Payne (2001) proposed that law enforcement officers may experience bias through the activation of Black stereotypes, especially when the cognitive resources needed to engage behavioral control are depleted. Correll et al.
In their first-person Shooter Task, participants are randomly presented with a range of one to four real-life photos of public spaces (e.g., parks, offices, courtyards). On the final image, a Black or White male target suddenly appears superimposed holding either a handgun or an innocuous object like a cell phone, soda can, or wallet. Participants must quickly press either a “shoot” or “don't shoot” button on their computer keyboard. When participants are given 850 ms to respond, they are faster to shoot armed Blacks versus Whites and slower to not shoot unarmed Blacks compared with Whites. However, providing participants with a 630-ms deadline results in a biased pattern of errors, such that unarmed Blacks are more likely to be incorrectly shot than their White counterparts and armed Whites are less likely to be shot than armed Black targets (see Correll et al. 2014, Mekawi & Bresin 2015). Biased responses were due to participants having lower thresholds for shooting Black compared with White targets (see also Greenwald et al. 2003). Furthermore, the magnitude of shooter bias was related to cultural awareness of Black stereotypes related to danger, violence, and aggression. Consequently, African American participants demonstrated the same pattern of shooter bias, despite holding presumably more positive attitudes about their group. These findings suggest that decisions to shoot may be strongly influenced by negative racial schemas that affect perceptions in ambiguous situations. Additional research supports the notion that racial stereotypes may serve as perceptual tuners that direct attention in a biased manner. Eberhardt et al. (2004) conducted a series of studies examining how associations between Blacks and crime affected visual processing. In their first study, undergraduates were subliminally primed with a photo of a Black male face, a White male face, or no face at all before completing a supposedly unrelated object detection task. On this critical task, severely degraded images of crime-relevant (e.g., guns, knives) or -irrelevant (e.g., phones, keys) objects appeared on the screen and slowly increased in clarity. Participants needed fewer frames to accurately detect a crime-relevant object following a Black versus White or no-face prime, a pattern of bias that was not related to their explicit racial attitudes. These results were replicated among California police officers who were primed with crime words (e.g., arrest, shoot) and then tested for their memory of the distractor face presented on the task. Compared with the correct image, officers were more likely to incorrectly choose a Black target with more stereotypical features following the crime primes. Early perceptual processes of the police may therefore be impacted by cultural associations that produce racial profiling of suspects and bias their subsequent treatment. Plant & Peruche (2005) also used actual law enforcement officers in their research to examine how race influenced their responses to criminal suspects. Police officers completed a more static version of the Shooter Task in which only photos of Black or White male faces appeared with a gun or object superimposed without a background image. The researchers wanted to examine whether repeated exposure to the program would reduce the bias expressed by the officers. As in past studies with undergraduate participants (e.g., Correll et al. 
2002), the police were initially more likely to shoot unarmed Black versus White targets and had a lower threshold for shooting Black targets. However, this biased tendency disappeared in the second half of trials, signifying that officers learned to dissociate race from the presence of weapons to make more accurate decisions on the task. The potential benefit of expert police training on performance is further supported by the findings of Correll et al. (2007b), who compared the performance of three different samples: Denver community members, Denver police officers, and national police officers. In contrast to citizens who demonstrated bias in both their reaction time and error rates, police officers demonstrated it only in their response latencies. In other words, police officers did not make racially biased mistakes on the task but were still faster to shoot armed Black men and slower to not shoot unarmed Black targets. This shooter bias was more pronounced among officers serving high-crime areas with larger Black and minority populations. The findings suggest that exposure to negative racial stereotypes can impact the speed with which police officers make decisions, but that their extensive training and field experience may allow them to exert more control over their behavior than regular citizens. In sum, independent labs have accumulated a considerable amount of evidence that race can impact crime-oriented perceptions and bias subsequent decision making. Yet, findings are often mixed when comparing data obtained from police officers versus undergraduate or civilian samples. Under certain circumstances, the police express a similar magnitude of racial bias as individuals not in law enforcement; in other situations, their prior experience helps them limit the influence of stereotypes.

Beyond the Impact of Race

The mixed results discussed above point to the importance of conducting research that considers factors other than race to more fully understand the complexity of real-life police decision making. To this end, some studies have explored how personal motivations, situational contexts, and physical cues may attenuate or exacerbate the expression of racial bias.

Personal motivation. Research that has examined motivational processes demonstrates that responses to race are not uniformly biased. For example, Payne (2001) found that motivation to control prejudice moderated the relationship between explicit measures of bias and performance on the WIT. Participants with low motivation to control prejudice tended to show a positive correlation between modern scores of racism and task performance. However, those with higher motivation levels tended to show a dissociation between explicit and implicit bias, indicating a regulatory effort to override stereotyping effects. Similarly, Amodio and colleagues (2006, 2008) have examined the impact of internal (personal) versus external (normative) motivations to respond without prejudice. Participants in their studies completed the WIT while having their brain activity recorded. The data indicated that internally motivated participants responded more accurately on the task, particularly following stereotypical errors. Because this neural activity occurred below conscious awareness, the researchers proposed that some individuals are able to engage a spontaneous form of control that helps reduce the influence of race on behavior.
In contrast, Swencionis & Goff (2017) proposed that the motivation to view the world in hierarchical terms may increase bias in police decisions. Social Dominance Theory (Sidanius & Pratto 1999) posits that group-based inequalities are maintained by cultural influences that promote social stratification based on factors such as age, sex, and race. Consequently, power is primarily distributed to and legitimized by high-status groups and institutions. Past work has found that people with high social dominance orientation (SDO) are more attracted to hierarchy-enhancing professions, such as law enforcement, politics, and business (Sidanius et al. 2004). Given that police officers tend to report greater SDO levels than public defenders, college students, or community members (Sidanius et al. 1994), they may be more prone to expressing discrimination against low-status groups.

Situational contexts. Recognizing that police decisions do not occur in a social vacuum, some researchers have attempted to recreate ecologically valid situations that may contribute to the expression of racial bias. For example, Correll et al. (2007a) reasoned that frequent media or environmental exposure to stereotypical depictions of Blacks may increase shooter bias. In line with their hypothesis, they found that participants who were first exposed to stories involving Black versus White criminal activity later showed more bias on the Shooter Task. A similar pattern emerged when they manipulated the number of armed Black and unarmed White targets appearing on the task. Thus, increasing the accessibility of associations between Blacks and danger resulted in more pronounced anti-Black bias. Cox et al. (2014) also argued for the use of more complex situational contexts to assess various psychological factors that influence real-life decisions. To this end, they developed a modified version of the Shooter Task that used short video clips along with static photos of the suspect and recorded responses through a gun apparatus instead of computer keys. Because the police usually have prior knowledge and expectations about neighborhoods, they also manipulated where the crimes on the task supposedly took place by providing the exact city location. Wisconsin police officers were randomly assigned to complete the task embedded within a primarily White or non-White neighborhood. When examining responses on photo trials, the researchers found that police officers did not make racially biased errors but were faster to shoot armed Black versus White targets, as in the work by Correll et al. (2007b). Interestingly, they also found that the composition of the neighborhood interacted with the race of the officers, such that more errors were made when officers were assigned to other-race areas.
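The "lower threshold for shooting Black targets" findings summarized above are often described in signal detection terms, where a decision criterion is estimated from the rate of shooting armed targets (hits) and unarmed targets (false alarms). The sketch below uses hypothetical rates chosen only for illustration; it is not the analysis or data of the cited studies.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return sensitivity (d') and decision criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical proportions: hit = shooting an armed target,
# false alarm = shooting an unarmed target.
conditions = {"Black targets": (0.95, 0.20), "White targets": (0.90, 0.10)}
for label, (hits, fas) in conditions.items():
    d_prime, criterion = sdt_measures(hits, fas)
    print(f"{label}: d' = {d_prime:.2f}, c = {criterion:.2f}")

# A lower (more negative) criterion corresponds to a more lenient threshold for
# deciding to shoot, independent of how well armed and unarmed targets are
# distinguished (d').
```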
Use only the document provided. If the question can not be answered then respond with 'I am unable to answer this request' EVIDENCE: BIAS IN POLICING Bias in the American legal system includes biases in law enforcement or policing, where racial disparities have long been documented and continue to persist. Compared with White Americans, Black and Latino men are disproportionately more likely to be stopped, searched, and arrested by police officers (Kahn & Martin 2016). Furthermore, members of these minority groups also experience greater use of force by the police (Goff & Kahn 2012, Kahn et al. 2016). Recently, a string of high-profile deadly cases involving Black men like Michael Brown, Eric Garner, and Walter Scott has increased public awareness of these hostile interactions with law enforcement. An initial analysis of public records revealed that non-White minorities made up almost half (47%) of all people killed by the police, despite comprising only 37% of the population. Furthermore, of those killed, 32% of Blacks and 25% of Latinos were unarmed, compared with 15% of Whites (Swaine et al. 2015). This troubling pattern of statistics has called into question the role that race may play in police decisions. Psychological research has examined this important social issue by directly investigating the content of racial stereotypes, as well as indirectly assessing how these associations affect perceptions and behavior. Self-report surveys have indicated that hostility, violence, and criminality are commonly associated with Black Americans, even by egalitarian-minded White Americans (Devine 1989, Devine & Elliot 1995, Dovidio et al. 1986). Additionally, priming low-prejudiced individuals with Black versus White stimuli typically results in the faster categorization of negative than positive attributes (e.g., Fazio et al. 1995, Greenwald et al. 1998, Wittenbrink et al. 1997). Together, these findings suggest that awareness of social stereotypes and exposure to stigmatized group members can affect decision making. The Impact of Race on Weapon and Crime Perception Applying the above rationale to police contexts, Payne (2001) developed the Weapons Identification Task (WIT) to better understand the psychological mechanisms that may drive racially biased shootings. This sequential priming procedure involves a series of trials that begin with the presentation of a Black or White face, which participants are instructed to ignore. After 200 ms, the prime is replaced by the target stimulus, which is a picture of either a tool or a handgun. Participants must correctly categorize the object as quickly as possible using one of two computer keys. Across two initial studies, Payne (2001) found evidence of racial bias in both the reaction times and error rates. Following the presentation of a Black versus White facial prime, participants were faster to correctly identify a gun and more likely to misidentify a tool as a gun, depending on the implementation of a response deadline. The results revealed that the racial primes had an automatic influence on the visual identification of weapons (see also Amodio et al. 2004, Klauer & Voss 2008, Payne et al. 2002). As such, Payne (2001) proposed that law enforcement officers may experience bias through the activation of Black stereotypes, especially when the cognitive resources needed to engage behavioral control are depleted. Correll et al. 
(2002) extended this line of inquiry by developing a video game that similarly examines the impact of race on weapon processing. In their first-person Shooter Task, participants are randomly presented with a range of one to four real-life photos of public spaces (e.g., parks, offices, courtyards). On the final image, a Black or White male target suddenly appears superimposed holding either a handgun or an innocuous object like a cell phone, soda can, or wallet. Participants must quickly press either a “shoot” or “don't shoot” button on their computer keyboard. When participants are given 850 ms to respond, they are faster to shoot armed Blacks versus Whites and slower to not shoot unarmed Blacks compared with Whites. However, providing participants with a 630-ms deadline results in a biased pattern of errors, such that unarmed Blacks are more likely to be incorrectly shot than their White counterparts and armed Whites are less likely to be shot than armed Black targets (see Correll et al. 2014, Mekawi & Bresin 2015). Biased responses were due to participants having lower thresholds for shooting Black compared with White targets (see also Greenwald et al. 2003). Furthermore, the magnitude of shooter bias was related to cultural awareness of Black stereotypes related to danger, violence, and aggression. Consequently, African American participants demonstrated the same pattern of shooter bias, despite holding presumably more positive attitudes about their group. These findings suggest that decisions to shoot may be strongly influenced by negative racial schemas that affect perceptions in ambiguous situations. Additional research supports the notion that racial stereotypes may serve as perceptual tuners that direct attention in a biased manner. Eberhardt et al. (2004) conducted a series of studies examining how associations between Blacks and crime affected visual processing. In their first study, undergraduates were subliminally primed with a photo of a Black male face, a White male face, or no face at all before completing a supposedly unrelated object detection task. On this critical task, severely degraded images of crime-relevant (e.g., guns, knives) or -irrelevant (e.g., phones, keys) objects appeared on the screen and slowly increased in clarity. Participants needed fewer frames to accurately detect a crime-relevant object following a Black versus White or no-face prime, a pattern of bias that was not related to their explicit racial attitudes. These results were replicated among California police officers who were primed with crime words (e.g., arrest, shoot) and then tested for their memory of the distractor face presented on the task. Compared with the correct image, officers were more likely to incorrectly choose a Black target with more stereotypical features following the crime primes. Early perceptual processes of the police may therefore be impacted by cultural associations that produce racial profiling of suspects and bias their subsequent treatment. Plant & Peruche (2005) also used actual law enforcement officers in their research to examine how race influenced their responses to criminal suspects. Police officers completed a more static version of the Shooter Task in which only photos of Black or White male faces appeared with a gun or object superimposed without a background image. The researchers wanted to examine whether repeated exposure to the program would reduce the bias expressed by the officers. As in past studies with undergraduate participants (e.g., Correll et al. 
2002), the police were initially more likely to shoot unarmed Black versus White targets and had a lower threshold for shooting Black targets. However, this biased tendency disappeared in the second half of trials, signifying that officers learned to dissociate race from the presence of weapons to make more accurate decisions on the task. The potential benefit of expert police training on performance is further supported by the findings of Correll et al. (2007b), who compared the performance of three different samples: Denver community members, Denver police officers, and national police officers. In contrast to citizens who demonstrated bias in both their reaction time and error rates, police officers demonstrated it only in their response latencies. In other words, police officers did not make racially biased mistakes on the task but were still faster to shoot armed Black men and slower to not shoot unarmed Black targets. This shooter bias was more pronounced among officers serving high-crime areas with larger Black and minority populations. The findings suggest that exposure to negative racial stereotypes can impact the speed with which police officers make decisions, but that their extensive training and field experience may allow them to exert more control over their behavior than regular citizens. In sum, independent labs have accumulated a considerable amount of evidence that race can impact crime-oriented perceptions and bias subsequent decision making. Yet, findings are often mixed when comparing data obtained from police officers versus undergraduate or civilian samples. Under certain circumstances, the police express a similar magnitude of racial bias as individuals not in law enforcement; in other situations, their prior experience helps them limit the influence of stereotypes. Beyond the Impact of Race The mixed results discussed above point to the importance of conducting research that considers factors other than race to more fully understand the complexity of real-life police decision making. To this end, some studies have explored how personal motivations, situational contexts, and physical cues may attenuate or exacerbate the expression of racial bias. Personal motivation. Research that has examined motivational processes demonstrates that responses to race are not uniformly biased. For example, Payne (2001) found that motivation to control prejudice moderated the relationship between explicit measures of bias and performance on the WIT. Participants with low motivation to control prejudice tended to show a positive correlation between modern scores of racism and task performance. However, those with higher motivation levels tended to show a dissociation between explicit and implicit bias, indicating a regulatory effort to override stereotyping effects. Similarly, Amodio and colleagues (2006, 2008) have examined the impact of internal (personal) versus external (normative) motivations to respond without prejudice. Participants in their studies completed the WIT while having their brain activity recorded. The data indicated that internally motivated participants responded more accurately on the task, particularly following stereotypical errors. Because this neural activity occurred below conscious awareness, the researchers proposed that some individuals are able to engage a spontaneous form of control that helps reduce the influence of race on behavior. 
In contrast, Swencionis & Goff (2017) proposed that the motivation to view the world in hierarchical terms may increase bias in police decisions. Social Dominance Theory (Sidanius & Pratto 1999) posits that group-based inequalities are maintained by cultural influences that promote social stratification based on factors such as age, sex, and race. Consequently, power is primarily distributed to and legitimized by high-status groups and institutions. Past work has found that people with high social dominance orientation (SDO) are more attracted to hierarchy-enhancing professions, such as law enforcement, politics, and business (Sidanius et al. 2004). Given that police officers tend to report greater SDO levels than public defenders, college students, or community members (Sidanius et al. 1994), they may be more prone to expressing discrimination against low-status groups. Situational contexts. Recognizing that police decisions do not occur in a social vacuum, some researchers have attempted to recreate ecologically valid situations that may contribute to the expression of racial bias. For example, Correll et al. (2007a) reasoned that frequent media or environmental exposure to stereotypical depictions of Blacks may increase shooter bias. In line with their hypothesizing, they found that participants who were first exposed to stories involving Black versus White criminal activity later showed more bias on the Shooter Task. A similar pattern emerged when they manipulated the number of armed Black and unarmed White targets appearing on the task. Thus, increasing the accessibility of associations between Blacks and danger resulted in more pronounced anti-Black bias. Cox et al. (2014) also argued for the use of more complex situational contexts to assess various psychological factors that influence real-life decisions. To this end, they developed a modified version of the Shooter Task that used short video clips along with static photos of the suspect and recorded responses through a gun apparatus instead of computer keys. Because the police usually have prior knowledge and expectations about neighborhoods, they also manipulated where the crimes on the task supposedly took place by providing the exact city location. Wisconsin police officers were randomly assigned to complete the task embedded within a primarily White or non-White neighborhood. When examining responses on photo trials, the researchers found that police officers did not make racially biased errors but were faster to shoot armed Black versus White targets, as in the work by Correll et al. (2007b). Interestingly, they also found that the composition of the neighborhood interacted with the race of the officers, such that more errors were made when officers were assigned to other-race areas. USER: Summarize the information from this paper. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
22
6
1,963
null
253
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
Can you summarize in 6-8 sentences how this paper suggests how to properly price AI stocks and what is so different about it compared to current strategies? Do not use the word AI or any other phrase that refers to Artificial Intelligence.
Artificial Intelligence (AI) is having a powerful impact in the domain of finance. Estimates of the 10-year revenue figure vary but could be as high as $3 trillion. Over a similar period, the market value of AI firms is expected to grow at a 37% rate. Although the opportunities seem boundless, valuations conducted in the financial markets are increasingly complex to carry out for AI/tech stocks. Artificial Intelligence in Finance: Valuations and Opportunities Yosef Bonaparte Finance Research Letters A version of this paper can be found here Want to read our summaries of academic finance papers? Check out our Academic Research Insight category. What are the research questions? While there is literature that describes the “domain” of artificial intelligence, there are very few, if any, that analyze the valuation and pricing of AI stocks. The authors attempt to fill the void with a two-part methodology. What are Academic Insights? The methodology combines behavioral and fundamental components. The fundamental model incorporates new AI innovation into the P/E ratio and R&D as a percentage of revenue. FUNDAMENTAL: The key is to estimate future revenue as a function of the exposure of the stock to AI technology. The author uses Nvidia as an example of an overpriced (yes, overpriced!) stock in June 2023 via fundamentals as follows: The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index. 2. BEHAVIORAL: The behavioral component is based on a technology sentiment index, using the Google trend of key technological terms. The key issue is to determine how the stock relates to AI technology. In the Nvidia case, the correlation between the Google Trend Index and Nvidia’s price acts as an estimate of how embedded the stock is to AI tech. Terms include: artificial intelligence, neural network, large language model, machine learning, generative AI, and deep learning. See Tables 1 and 2 below for estimates of the correlation for Nvidia over 2 separate time -periods. The estimate of AI sensitivity is found in Table 3. For Nvidia, the sum of artificial intelligence searches is 17.5% (from Table 1: 683 divided by total searches at 3904) and 35.2% when weighted across search terms. Nvidia has substantial exposure to AI terms, however, the exposure is less than the exposure of MSFT and GOOG. The same methodology can be used on other stocks to determine sensitivity to the AI opportunity. Why does it matter? The authors identified three key areas of knowledge an investor or analyst should acquire to understand how AI is transforming the financial landscape. First, develop a thorough understanding of the AI concept in terms of innovation and relevance to finance. Second, develop the methodologies for assessing AI companies and tech investment funds. Third, focus on the industry leaders in AI to provide context and provoke an examination of the future of AI. How will it influence the capital markets, how and what will drive growth in sales, and ultimately, how will valuations be influenced? The most important chart from the paper The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== Can you summarize in 6-8 sentences how this paper suggests how to properly price AI stocks and what is so different about it compared to current strategies? Do not use the word AI or any other phrase that refers to Artificial Intelligence. {passage 0} ========== Artificial Intelligence (AI) is having a powerful impact in the domain of finance. Estimates of the 10-year revenue figure vary but could be as high as $3 trillion. Over a similar period, the market value of AI firms is expected to grow at a 37% rate. Although the opportunities seem boundless, valuations conducted in the financial markets are increasingly complex to carry out for AI/tech stocks. Artificial Intelligence in Finance: Valuations and Opportunities Yosef Bonaparte Finance Research Letters A version of this paper can be found here Want to read our summaries of academic finance papers? Check out our Academic Research Insight category. What are the research questions? While there is literature that describes the “domain” of artificial intelligence, there are very few, if any, that analyze the valuation and pricing of AI stocks. The authors attempt to fill the void with a two-part methodology. What are Academic Insights? The methodology combines behavioral and fundamental components. The fundamental model incorporates new AI innovation into the P/E ratio and R&D as a percentage of revenue. FUNDAMENTAL: The key is to estimate future revenue as a function of the exposure of the stock to AI technology. The author uses Nvidia as an example of an overpriced (yes, overpriced!) stock in June 2023 via fundamentals as follows: The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index. 2. BEHAVIORAL: The behavioral component is based on a technology sentiment index, using the Google trend of key technological terms. The key issue is to determine how the stock relates to AI technology. In the Nvidia case, the correlation between the Google Trend Index and Nvidia’s price acts as an estimate of how embedded the stock is to AI tech. Terms include: artificial intelligence, neural network, large language model, machine learning, generative AI, and deep learning. See Tables 1 and 2 below for estimates of the correlation for Nvidia over 2 separate time -periods. The estimate of AI sensitivity is found in Table 3. For Nvidia, the sum of artificial intelligence searches is 17.5% (from Table 1: 683 divided by total searches at 3904) and 35.2% when weighted across search terms. Nvidia has substantial exposure to AI terms, however, the exposure is less than the exposure of MSFT and GOOG. The same methodology can be used on other stocks to determine sensitivity to the AI opportunity. Why does it matter? The authors identified three key areas of knowledge an investor or analyst should acquire to understand how AI is transforming the financial landscape. First, develop a thorough understanding of the AI concept in terms of innovation and relevance to finance. Second, develop the methodologies for assessing AI companies and tech investment funds. Third, focus on the industry leaders in AI to provide context and provoke an examination of the future of AI. 
How will it influence the capital markets, how and what will drive growth in sales, and ultimately, how will valuations be influenced? The most important chart from the paper The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index. https://alphaarchitect.com/2024/04/valuing-artificial-intelligence-ai-stocks/
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Artificial Intelligence (AI) is having a powerful impact in the domain of finance. Estimates of the 10-year revenue figure vary but could be as high as $3 trillion. Over a similar period, the market value of AI firms is expected to grow at a 37% rate. Although the opportunities seem boundless, valuations conducted in the financial markets are increasingly complex to carry out for AI/tech stocks. Artificial Intelligence in Finance: Valuations and Opportunities Yosef Bonaparte Finance Research Letters A version of this paper can be found here Want to read our summaries of academic finance papers? Check out our Academic Research Insight category. What are the research questions? While there is literature that describes the “domain” of artificial intelligence, there are very few, if any, that analyze the valuation and pricing of AI stocks. The authors attempt to fill the void with a two-part methodology. What are Academic Insights? The methodology combines behavioral and fundamental components. The fundamental model incorporates new AI innovation into the P/E ratio and R&D as a percentage of revenue. FUNDAMENTAL: The key is to estimate future revenue as a function of the exposure of the stock to AI technology. The author uses Nvidia as an example of an overpriced (yes, overpriced!) stock in June 2023 via fundamentals as follows: The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index. 2. BEHAVIORAL: The behavioral component is based on a technology sentiment index, using the Google trend of key technological terms. The key issue is to determine how the stock relates to AI technology. In the Nvidia case, the correlation between the Google Trend Index and Nvidia’s price acts as an estimate of how embedded the stock is to AI tech. Terms include: artificial intelligence, neural network, large language model, machine learning, generative AI, and deep learning. See Tables 1 and 2 below for estimates of the correlation for Nvidia over 2 separate time -periods. The estimate of AI sensitivity is found in Table 3. For Nvidia, the sum of artificial intelligence searches is 17.5% (from Table 1: 683 divided by total searches at 3904) and 35.2% when weighted across search terms. Nvidia has substantial exposure to AI terms, however, the exposure is less than the exposure of MSFT and GOOG. The same methodology can be used on other stocks to determine sensitivity to the AI opportunity. Why does it matter? The authors identified three key areas of knowledge an investor or analyst should acquire to understand how AI is transforming the financial landscape. First, develop a thorough understanding of the AI concept in terms of innovation and relevance to finance. Second, develop the methodologies for assessing AI companies and tech investment funds. Third, focus on the industry leaders in AI to provide context and provoke an examination of the future of AI. How will it influence the capital markets, how and what will drive growth in sales, and ultimately, how will valuations be influenced? 
The most important chart from the paper The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged and do not reflect management or trading fees, and one cannot invest directly in an index. USER: Can you summarize in 6-8 sentences how this paper suggests how to properly price AI stocks and what is so different about it compared to current strategies? Do not use the word AI or any other phrase that refers to Artificial Intelligence. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
42
569
null
425
Only use the document provided to answer the question. Cite the section of text you are basing your response on.
Using the provided document, what terrain may cause connection issues for WISPs?
**Rural access** One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[141] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[30] Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[142] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[143] The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[144] In New Zealand, a fund has been formed by the government to improve rural broadband,[145] and mobile phone coverage. Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.[146] Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks.
{Document} ========== **Rural access** One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[141] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[30] Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[142] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[143] The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[144] In New Zealand, a fund has been formed by the government to improve rural broadband,[145] and mobile phone coverage. Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.[146] Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks. ---------------- {System Instruction} ========== Only use the document provided to answer the question. Cite the section of text you are basing your response on. ---------------- {Question} ========== Using the provided document, what terrain may cause connection issues for WISPs?
Only use the document provided to answer the question. Cite the section of text you are basing your response on. EVIDENCE: **Rural access** One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[141] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[30] Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[142] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[143] The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[144] In New Zealand, a fund has been formed by the government to improve rural broadband,[145] and mobile phone coverage. Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.[146] Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks. USER: Using the provided document, what terrain may cause connection issues for WISPs? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
12
315
null
823
Craft your answer only using the information provided in the context block. Keep your answer under 200 words.
How many complaints were within OIG jurisdiction?
Section 1001 of the USA PATRIOT Act (Patriot Act), Public Law 107-56, directs the Office of the Inspector General (OIG) of the U.S. Department of Justice (DOJ or Department) to undertake a series of actions related to claims of civil rights or civil liberties violations allegedly committed by DOJ employees. It also requires the OIG to provide semiannual reports to Congress on the implementation of the OIG’s responsibilities under Section 1001. This report summarizes the OIG’s Section 1001-related activities from July 1, 2023, through December 31, 2023. Introduction The OIG is an independent entity within DOJ that reports to both the Attorney General and Congress. The OIG’s mission is to investigate allegations of waste, fraud, and abuse in DOJ programs and personnel, and to promote economy and efficiency in DOJ operations. The OIG has jurisdiction to review programs and personnel in all DOJ components, including the Federal Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), the Federal Bureau of Prisons (BOP), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), the U.S. Marshals Service (USMS), and the U.S. Attorneys’ Offices.1 The OIG consists of the Immediate Office of the Inspector General and the following divisions and offices: • Audit Division conducts independent audits of Department programs, computer systems, financial statements, and DOJ-awarded grants and contracts. • Evaluation and Inspections Division conducts program and management reviews that involve on-site inspections, statistical analysis, and other techniques to review Department programs and activities. • Investigations Division investigates allegations of bribery, fraud, abuse, civil rights violations, and violations of other criminal laws and administrative procedures that govern Department employees, contractors, and grantees. • Oversight and Review Division blends the skills of attorneys, investigators, and program analysts to investigate or review high profile or sensitive matters involving Department programs or employees. • Information Technology Division executes the OIG’s IT strategic vision and goals by directing technology and business process integration, network administration, implementation of computer hardware and software, cybersecurity, applications development, programming services, policy formulation, and other mission-support activities. Management and Planning Division provides the Inspector General with advice on administrative and fiscal policy and assists OIG components by providing services in the areas of planning, budget, finance, quality assurance, personnel, communications, procurement, facilities, telecommunications, security, and general support. • Office of General Counsel provides legal advice to OIG management and staff. In addition, the office drafts memoranda on issues of law; prepares administrative subpoenas; represents the OIG in personnel, contractual, and legal matters; and responds to Freedom of Information Act requests. The OIG has a staff of approximately 500 employees, about half of whom are based in Washington, D.C. The OIG has 28 Investigations Division field locations and 6 Audit Division regional offices located throughout the country. 
Section 1001 of the Patriot Act Section 1001 of the Patriot Act provides the following: The DOJ Inspector General shall designate one official who shall― (1) review information and receive complaints alleging abuses of civil rights and civil liberties by DOJ employees and officials; (2) make public through the Internet, radio, television, and newspaper advertisements information on the responsibilities and functions of, and how to contact, the official; and (3) submit to the Committee on the Judiciary of the House of Representatives and the Committee on the Judiciary of the Senate on a semiannual basis a report on the implementation of this subsection and detailing any abuses described in paragraph (1), including a description of the use of funds appropriations used to carry out this subsection. Responsibilities, Functions, and Contact Information of the OIG’s Designated Section 1001 Official The DOJ Inspector General has designated the OIG’s Assistant Inspector General for Investigations as the official responsible for the duties required under Section 1001, which are described in the next section of this report. Civil Rights and Civil Liberties Complaints Section 1001 requires the OIG to “review information and receive complaints alleging abuses of civil rights and civil liberties by employees and officials of the Department of Justice.” While the phrase “civil rights and civil liberties” is not specifically defined in the Patriot Act, the OIG has looked to the “Sense of Congress” provisions in the statute, namely Sections 102 and 1002, for context. Sections 102 and 1002 identify certain ethnic and religious groups who would be vulnerable to abuse due to a possible backlash from the terrorist attacks of September 11, 2001, including Muslims, Arabs, Sikhs, and South Asians. The OIG’s Investigations Division, which is headed by the Assistant Inspector General for Investigations, manages the OIG’s Section 1001 investigative responsibilities. The two units with primary responsibility for coordinating these activities are Operations Branch I and Operations Branch II, each of which is directed by a Special Agent in Charge and two Assistant Special Agents in Charge. In addition, these units are supported by Investigative Specialists and other staff assigned to the Hotline Operations Branch, who divide their time between Section 1001 and other responsibilities. The Investigations Division receives civil rights and civil liberties complaints via mail, email, telephone, and fax. Upon receipt, Division Assistant Special Agents in Charge review the complaints and assign an initial disposition to each matter, and Investigative Specialists enter the complaints alleging a violation within the investigative jurisdiction of the OIG or another federal agency into an OIG database. Serious civil rights and civil liberties allegations relating to actions of DOJ employees or contractors are typically assigned to an OIG Investigations Division field office, where Special Agents conduct investigations of criminal violations and administrative misconduct. Given the number of complaints the OIG receives compared to its limited resources, the OIG does not investigate all allegations of misconduct against DOJ employees. The OIG refers many complaints involving DOJ employees to internal affairs offices in DOJ components such as the FBI Inspection Division, the DEA Office of Professional Responsibility, and the BOP Office of Internal Affairs. 
In certain referrals, the OIG requires the components to report the results of their investigations to the OIG. In most cases, the OIG notifies the complainant of the referral. Many complaints the OIG receives involve matters outside its jurisdiction. When those matters identify a serious issue for investigation, such as a threat to life or safety, the OIG forwards them to the appropriate investigative entity. In other cases, the complainant is directed to another investigative agency when possible. Allegations related to the authority of a DOJ attorney to litigate, investigate, or provide legal advice are referred to the DOJ Office of Professional Responsibility. Allegations related solely to state and local law enforcement or government officials that raise a federal civil rights concern are forwarded to the DOJ Civil Rights Division. When an allegation received from any source involves a potential violation of federal civil rights statutes by a DOJ employee, the OIG discusses the complaint with the DOJ Civil Rights Division for possible prosecution. In some cases, the Civil Rights Division accepts the case and requests additional investigation by either the OIG or the FBI. In other cases, the Civil Rights Division declines prosecution and either the OIG or the appropriate DOJ internal affairs office reviews the case for possible administrative misconduct. Complaints Processed During This Reporting Period Between July 1, 2023, and December 31, 2023, the period covered by this report, the OIG processed 739 new complaints that were identified by the complainant as civil rights or civil liberties complaints. Of these complaints, 717 did not fall within the OIG’s jurisdiction or did not warrant further investigation. These complaints involved allegations against agencies or entities outside the DOJ, including other federal agencies, local governments, or private businesses, as well as allegations that were not suitable for investigation by the OIG, and could not be or were not referred to another agency for investigation, generally because the complaints failed to identify a subject or agency. The OIG found that the remaining 22 of the 739 complaints it received involved DOJ employees or DOJ components and included allegations that required further review. The OIG determined that 20 of these complaints generally raised management issues unrelated to the OIG’s Section 1001 duties and referred these complaints to DOJ components for appropriate handling. Examples of complaints in this category included allegations by federal prisoners about the general prison conditions, and by others that the FBI did not initiate an investigation into particular allegations. The OIG identified two complaints by federal prisoners that alleged a potential civil rights or civil liberties violation under Section 1001 but, based on the lack of an identified subject or the non-specific nature of the allegations, determined that these complaints generally raised management issues. The OIG referred these complaints to BOP for appropriate handling.
system instructions: [Craft your answer only using the information provided in the context block. Keep your answer under 200 words.] question: [ How many complaints were within OIG jurisdiction? ] context block: [Section 1001 of the USA PATRIOT Act (Patriot Act), Public Law 107-56, directs the Office of the Inspector General (OIG) of the U.S. Department of Justice (DOJ or Department) to undertake a series of actions related to claims of civil rights or civil liberties violations allegedly committed by DOJ employees. It also requires the OIG to provide semiannual reports to Congress on the implementation of the OIG’s responsibilities under Section 1001. This report summarizes the OIG’s Section 1001-related activities from July 1, 2023, through December 31, 2023. Introduction The OIG is an independent entity within DOJ that reports to both the Attorney General and Congress. The OIG’s mission is to investigate allegations of waste, fraud, and abuse in DOJ programs and personnel, and to promote economy and efficiency in DOJ operations. The OIG has jurisdiction to review programs and personnel in all DOJ components, including the Federal Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), the Federal Bureau of Prisons (BOP), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), the U.S. Marshals Service (USMS), and the U.S. Attorneys’ Offices.1 The OIG consists of the Immediate Office of the Inspector General and the following divisions and offices: • Audit Division conducts independent audits of Department programs, computer systems, financial statements, and DOJ-awarded grants and contracts. • Evaluation and Inspections Division conducts program and management reviews that involve on-site inspections, statistical analysis, and other techniques to review Department programs and activities. • Investigations Division investigates allegations of bribery, fraud, abuse, civil rights violations, and violations of other criminal laws and administrative procedures that govern Department employees, contractors, and grantees. • Oversight and Review Division blends the skills of attorneys, investigators, and program analysts to investigate or review high profile or sensitive matters involving Department programs or employees. • Information Technology Division executes the OIG’s IT strategic vision and goals by directing technology and business process integration, network administration, implementation of computer hardware and software, cybersecurity, applications development, programming services, policy formulation, and other mission-support activities. Management and Planning Division provides the Inspector General with advice on administrative and fiscal policy and assists OIG components by providing services in the areas of planning, budget, finance, quality assurance, personnel, communications, procurement, facilities, telecommunications, security, and general support. • Office of General Counsel provides legal advice to OIG management and staff. In addition, the office drafts memoranda on issues of law; prepares administrative subpoenas; represents the OIG in personnel, contractual, and legal matters; and responds to Freedom of Information Act requests. The OIG has a staff of approximately 500 employees, about half of whom are based in Washington, D.C. The OIG has 28 Investigations Division field locations and 6 Audit Division regional offices located throughout the country. 
Section 1001 of the Patriot Act Section 1001 of the Patriot Act provides the following: The DOJ Inspector General shall designate one official who shall― (1) review information and receive complaints alleging abuses of civil rights and civil liberties by DOJ employees and officials; (2) make public through the Internet, radio, television, and newspaper advertisements information on the responsibilities and functions of, and how to contact, the official; and (3) submit to the Committee on the Judiciary of the House of Representatives and the Committee on the Judiciary of the Senate on a semiannual basis a report on the implementation of this subsection and detailing any abuses described in paragraph (1), including a description of the use of funds appropriations used to carry out this subsection. Responsibilities, Functions, and Contact Information of the OIG’s Designated Section 1001 Official The DOJ Inspector General has designated the OIG’s Assistant Inspector General for Investigations as the official responsible for the duties required under Section 1001, which are described in the next section of this report. Civil Rights and Civil Liberties Complaints Section 1001 requires the OIG to “review information and receive complaints alleging abuses of civil rights and civil liberties by employees and officials of the Department of Justice.” While the phrase “civil rights and civil liberties” is not specifically defined in the Patriot Act, the OIG has looked to the “Sense of Congress” provisions in the statute, namely Sections 102 and 1002, for context. Sections 102 and 1002 identify certain ethnic and religious groups who would be vulnerable to abuse due to a possible backlash from the terrorist attacks of September 11, 2001, including Muslims, Arabs, Sikhs, and South Asians. The OIG’s Investigations Division, which is headed by the Assistant Inspector General for Investigations, manages the OIG’s Section 1001 investigative responsibilities. The two units with primary responsibility for coordinating these activities are Operations Branch I and Operations Branch II, each of which is directed by a Special Agent in Charge and two Assistant Special Agents in Charge. In addition, these units are supported by Investigative Specialists and other staff assigned to the Hotline Operations Branch, who divide their time between Section 1001 and other responsibilities. The Investigations Division receives civil rights and civil liberties complaints via mail, email, telephone, and fax. Upon receipt, Division Assistant Special Agents in Charge review the complaints and assign an initial disposition to each matter, and Investigative Specialists enter the complaints alleging a violation within the investigative jurisdiction of the OIG or another federal agency into an OIG database. Serious civil rights and civil liberties allegations relating to actions of DOJ employees or contractors are typically assigned to an OIG Investigations Division field office, where Special Agents conduct investigations of criminal violations and administrative misconduct. Given the number of complaints the OIG receives compared to its limited resources, the OIG does not investigate all allegations of misconduct against DOJ employees. The OIG refers many complaints involving DOJ employees to internal affairs offices in DOJ components such as the FBI Inspection Division, the DEA Office of Professional Responsibility, and the BOP Office of Internal Affairs. 
In certain referrals, the OIG requires the components to report the results of their investigations to the OIG. In most cases, the OIG notifies the complainant of the referral. Many complaints the OIG receives involve matters outside its jurisdiction. When those matters identify a serious issue for investigation, such as a threat to life or safety, the OIG forwards them to the appropriate investigative entity. In other cases, the complainant is directed to another investigative agency when possible. Allegations related to the authority of a DOJ attorney to litigate, investigate, or provide legal advice are referred to the DOJ Office of Professional Responsibility. Allegations related solely to state and local law enforcement or government officials that raise a federal civil rights concern are forwarded to the DOJ Civil Rights Division. When an allegation received from any source involves a potential violation of federal civil rights statutes by a DOJ employee, the OIG discusses the complaint with the DOJ Civil Rights Division for possible prosecution. In some cases, the Civil Rights Division accepts the case and requests additional investigation by either the OIG or the FBI. In other cases, the Civil Rights Division declines prosecution and either the OIG or the appropriate DOJ internal affairs office reviews the case for possible administrative misconduct. Complaints Processed During This Reporting Period Between July 1, 2023, and December 31, 2023, the period covered by this report, the OIG processed 739 new complaints that were identified by the complainant as civil rights or civil liberties complaints. Of these complaints, 717 did not fall within the OIG’s jurisdiction or did not warrant further investigation. These complaints involved allegations against agencies or entities outside the DOJ, including other federal agencies, local governments, or private businesses, as well as allegations that were not suitable for investigation by the OIG, and could not be or were not referred to another agency for investigation, generally because the complaints failed to identify a subject or agency. The OIG found that the remaining 22 of the 739 complaints it received involved DOJ employees or DOJ components and included allegations that required further review. The OIG determined that 20 of these complaints generally raised management issues unrelated to the OIG’s Section 1001 duties and referred these complaints to DOJ components for appropriate handling. Examples of complaints in this category included allegations by federal prisoners about the general prison conditions, and by others that the FBI did not initiate an investigation into particular allegations. The OIG identified two complaints by federal prisoners that alleged a potential civil rights or civil liberties violation under Section 1001 but, based on the lack of an identified subject or the non-specific nature of the allegations, determined that these complaints generally raised management issues. The OIG referred these complaints to BOP for appropriate handling.]
Craft your answer only using the information provided in the context block. Keep your answer under 200 words. EVIDENCE: Section 1001 of the USA PATRIOT Act (Patriot Act), Public Law 107-56, directs the Office of the Inspector General (OIG) of the U.S. Department of Justice (DOJ or Department) to undertake a series of actions related to claims of civil rights or civil liberties violations allegedly committed by DOJ employees. It also requires the OIG to provide semiannual reports to Congress on the implementation of the OIG’s responsibilities under Section 1001. This report summarizes the OIG’s Section 1001-related activities from July 1, 2023, through December 31, 2023. Introduction The OIG is an independent entity within DOJ that reports to both the Attorney General and Congress. The OIG’s mission is to investigate allegations of waste, fraud, and abuse in DOJ programs and personnel, and to promote economy and efficiency in DOJ operations. The OIG has jurisdiction to review programs and personnel in all DOJ components, including the Federal Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), the Federal Bureau of Prisons (BOP), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), the U.S. Marshals Service (USMS), and the U.S. Attorneys’ Offices.1 The OIG consists of the Immediate Office of the Inspector General and the following divisions and offices: • Audit Division conducts independent audits of Department programs, computer systems, financial statements, and DOJ-awarded grants and contracts. • Evaluation and Inspections Division conducts program and management reviews that involve on-site inspections, statistical analysis, and other techniques to review Department programs and activities. • Investigations Division investigates allegations of bribery, fraud, abuse, civil rights violations, and violations of other criminal laws and administrative procedures that govern Department employees, contractors, and grantees. • Oversight and Review Division blends the skills of attorneys, investigators, and program analysts to investigate or review high profile or sensitive matters involving Department programs or employees. • Information Technology Division executes the OIG’s IT strategic vision and goals by directing technology and business process integration, network administration, implementation of computer hardware and software, cybersecurity, applications development, programming services, policy formulation, and other mission-support activities. Management and Planning Division provides the Inspector General with advice on administrative and fiscal policy and assists OIG components by providing services in the areas of planning, budget, finance, quality assurance, personnel, communications, procurement, facilities, telecommunications, security, and general support. • Office of General Counsel provides legal advice to OIG management and staff. In addition, the office drafts memoranda on issues of law; prepares administrative subpoenas; represents the OIG in personnel, contractual, and legal matters; and responds to Freedom of Information Act requests. The OIG has a staff of approximately 500 employees, about half of whom are based in Washington, D.C. The OIG has 28 Investigations Division field locations and 6 Audit Division regional offices located throughout the country. 
Section 1001 of the Patriot Act Section 1001 of the Patriot Act provides the following: The DOJ Inspector General shall designate one official who shall― (1) review information and receive complaints alleging abuses of civil rights and civil liberties by DOJ employees and officials; (2) make public through the Internet, radio, television, and newspaper advertisements information on the responsibilities and functions of, and how to contact, the official; and (3) submit to the Committee on the Judiciary of the House of Representatives and the Committee on the Judiciary of the Senate on a semiannual basis a report on the implementation of this subsection and detailing any abuses described in paragraph (1), including a description of the use of funds appropriations used to carry out this subsection. Responsibilities, Functions, and Contact Information of the OIG’s Designated Section 1001 Official The DOJ Inspector General has designated the OIG’s Assistant Inspector General for Investigations as the official responsible for the duties required under Section 1001, which are described in the next section of this report. Civil Rights and Civil Liberties Complaints Section 1001 requires the OIG to “review information and receive complaints alleging abuses of civil rights and civil liberties by employees and officials of the Department of Justice.” While the phrase “civil rights and civil liberties” is not specifically defined in the Patriot Act, the OIG has looked to the “Sense of Congress” provisions in the statute, namely Sections 102 and 1002, for context. Sections 102 and 1002 identify certain ethnic and religious groups who would be vulnerable to abuse due to a possible backlash from the terrorist attacks of September 11, 2001, including Muslims, Arabs, Sikhs, and South Asians. The OIG’s Investigations Division, which is headed by the Assistant Inspector General for Investigations, manages the OIG’s Section 1001 investigative responsibilities. The two units with primary responsibility for coordinating these activities are Operations Branch I and Operations Branch II, each of which is directed by a Special Agent in Charge and two Assistant Special Agents in Charge. In addition, these units are supported by Investigative Specialists and other staff assigned to the Hotline Operations Branch, who divide their time between Section 1001 and other responsibilities. The Investigations Division receives civil rights and civil liberties complaints via mail, email, telephone, and fax. Upon receipt, Division Assistant Special Agents in Charge review the complaints and assign an initial disposition to each matter, and Investigative Specialists enter the complaints alleging a violation within the investigative jurisdiction of the OIG or another federal agency into an OIG database. Serious civil rights and civil liberties allegations relating to actions of DOJ employees or contractors are typically assigned to an OIG Investigations Division field office, where Special Agents conduct investigations of criminal violations and administrative misconduct. Given the number of complaints the OIG receives compared to its limited resources, the OIG does not investigate all allegations of misconduct against DOJ employees. The OIG refers many complaints involving DOJ employees to internal affairs offices in DOJ components such as the FBI Inspection Division, the DEA Office of Professional Responsibility, and the BOP Office of Internal Affairs. 
In certain referrals, the OIG requires the components to report the results of their investigations to the OIG. In most cases, the OIG notifies the complainant of the referral. Many complaints the OIG receives involve matters outside its jurisdiction. When those matters identify a serious issue for investigation, such as a threat to life or safety, the OIG forwards them to the appropriate investigative entity. In other cases, the complainant is directed to another investigative agency when possible. Allegations related to the authority of a DOJ attorney to litigate, investigate, or provide legal advice are referred to the DOJ Office of Professional Responsibility. Allegations related solely to state and local law enforcement or government officials that raise a federal civil rights concern are forwarded to the DOJ Civil Rights Division. When an allegation received from any source involves a potential violation of federal civil rights statutes by a DOJ employee, the OIG discusses the complaint with the DOJ Civil Rights Division for possible prosecution. In some cases, the Civil Rights Division accepts the case and requests additional investigation by either the OIG or the FBI. In other cases, the Civil Rights Division declines prosecution and either the OIG or the appropriate DOJ internal affairs office reviews the case for possible administrative misconduct. Complaints Processed During This Reporting Period Between July 1, 2023, and December 31, 2023, the period covered by this report, the OIG processed 739 new complaints that were identified by the complainant as civil rights or civil liberties complaints. Of these complaints, 717 did not fall within the OIG’s jurisdiction or did not warrant further investigation. These complaints involved allegations against agencies or entities outside the DOJ, including other federal agencies, local governments, or private businesses, as well as allegations that were not suitable for investigation by the OIG, and could not be or were not referred to another agency for investigation, generally because the complaints failed to identify a subject or agency. The OIG found that the remaining 22 of the 739 complaints it received involved DOJ employees or DOJ components and included allegations that required further review. The OIG determined that 20 of these complaints generally raised management issues unrelated to the OIG’s Section 1001 duties and referred these complaints to DOJ components for appropriate handling. Examples of complaints in this category included allegations by federal prisoners about the general prison conditions, and by others that the FBI did not initiate an investigation into particular allegations. The OIG identified two complaints by federal prisoners that alleged a potential civil rights or civil liberties violation under Section 1001 but, based on the lack of an identified subject or the non-specific nature of the allegations, determined that these complaints generally raised management issues. The OIG referred these complaints to BOP for appropriate handling. USER: How many complaints were within OIG jurisdiction? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
18
7
1,436
null
231
Response must use only information contained in the context block to answer the question. Model should not rely on its own knowledge or outside sources of information when responding.
What are the consequences of using your credit card to get cash instead of just making regular purchases?
First Timer’s Guide: Credit Cards Used the right way, your credit card can be your new financial BFF. Like most things, with great power comes great responsibility. And credit cards are no different. Used the right way, they can be your new financial BFF. But before you tap, swipe, and charge your way into a bold new financial future, it’s important to have a handle on the basics to avoid some of the downsides of living that plastic life. First things first: What is a credit card? In the most basic sense, a credit card is a piece of plastic that allows you to pay for things with borrowed money. It’s an agreement between you and a financial institution where you can opt to pay on credit rather than with actual money. In practice, it’s a little more involved than that. Your credit card comes with a limit—that is the amount of money you have to borrow against. And those charges? You’re going to pay interest on them if you carry a balance. But we’re getting ahead of ourselves. Before you get swiping, make sure you know why. And how, so you can do it responsibly. Why should you have a credit card? There are lots of reasons why having a credit card can make you into a financial superhero: TO BUILD CREDIT Somewhere down the line, you will need a credit history. And a credit card—when used correctly—is one of the easiest ways to build credit. When the time comes to take out a car loan or get a mortgage, your financial institution will refer back to your credit history to see how reliable you are with borrowing money. So even if a credit card seems unnecessary, making frequent purchases with it and immediately paying it off will help you build a positive credit history, which will pay off in the future. FLIGHTS, RENTALS, HOTELS, AND ONLINE SHOPPING If you want to get on planes, trains, or automobiles, or to purchase the latest bauble from your favourite online retailer, you’re going to need a credit card. Ditto for booking a room in a hotel, booking concert tickets, and more. REWARDS A lot of cards actually reward you for using them with things like cash back, travel points, or exclusive offers like concert tickets. As long as you’re managing your balance wisely, using your credit card frequently can help you treat yourself later. EMERGENCIES Hopefully it never happens, but every once in a while we all get stuck in emergencies where we just don’t have cash on hand. And although you should never put something on your credit card if you don’t have the money to pay for it, your card might help you get out of a tough situation in the very short term – or at least until you can take stock of your situation and sit down with your financial expert to come up with a longer term plan. How to choose a card that’s right for you Now that you’ve decided to get a credit card, you have to ask yourself—which one should I apply for? Types of credit cards No or Low Annual Fee Cards: These cards offer the convenience of having a credit card in your wallet without a high annual fee. Most low or no annual fee cards offer basic rewards but may not accumulate perks as quickly as a fee-based card. Low Interest Rate Cards: Many cards have interest rates upwards of 19.5%, but there are cards available with lower interest rates in exchange for a low annual fee. These cards often don’t accumulate rewards quickly, but if you find yourself carrying a balance on your card month over month, this can be a smart choice. Cash Back Cards: Not all card rewards come in the form of points. 
For every purchase you make, cash back cards offer a percentage back in cash credited to your statement at a set time. Rewards Cards: For every purchase you make on your card, you’ll accumulate a set number of rewards points. Points can be redeemed for all sorts of different things, ranging from the latest gadgets and gift cards, to concert tickets and experiences. HonestMoney.ca Student Cards: You guessed it! These cards are specifically meant for students who are just starting to build their credit. These often come with low or no fees and offer basic rewards. Travel Rewards Cards: Similar to a rewards card, but focused on travel. Travel rewards cards feature points that can be redeemed for flights, hotels, and car rentals and often include insurance coverage for things like out-of-country medical, lost luggage, or changes to travel plans. US Dollar Cards: These cards allow you to make purchases directly in US dollars. It’s a good idea to be honest with yourself about how you plan to use your card and what’s really important to you. For instance, if you’re keeping your card in case of emergencies only, a low or no annual fee card might make the most sense. If you find yourself traveling often, the protections and perks that come with a travel rewards card might provide you with the best value. Once you have a better sense of your needs and habits, take the time to go online and do a little bit of research. Check out and compare different cards. Look at the features and benefits and what you need to apply. Some cards have a minimum income threshold to qualify or are designed specifically for students, so make sure you know what you’re getting yourself into. HonestMoney.ca Applying for your card Just because you want a credit card, doesn’t mean you can always get a credit card. Like any kind of credit, there is an application process to complete before you can start spending. 1. Go online (financial institution) or in branch. 2. Fill out an application; pay stubs, Social Insurance Number, ID, employment & income verification; other important info. 3. (If approved) activate your card! How to manage your card So, you have your credit card. Now what? While using your card is pretty straight forward, there are a couple of important things to know about managing your card. First, not every purchase on a credit card is created equal. While most of us tend to think of credit card purchases as tapping or swiping your card in a store or inputting your information online, you can also use your credit card to get cash or to make cash-like transactions. This is called a cash advance. Taking a cash advance might sound like a good idea, but this can be a costly way to access cash in the long run. Cash advances often charge a small fee to initiate and almost always charge a higher rate of interest than regular purchases. The other thing to keep in mind is how interest accumulates. With regular purchases, you have a grace period (usually 21 days or more) before interest begins to accumulate on the money you owe. When you take a cash advance, interest starts to accumulate right away and will continue to accrue until the whole amount of the advance is paid off in full. Don’t apply for every offer: Each time you apply for a credit card there will be an inquiry made on your credit history. Lots of inquiries over a short period of time can impact your credit score and lots of open, available cards can hurt your chances to qualify for more credit in the future. 
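To make the cash advance comparison above concrete, here is a minimal Python sketch. The 21-day grace period on purchases comes from the guide; the specific APRs (19.9% for purchases, 22.9% for cash advances), the $5 initiation fee, and the simple daily-interest math are illustrative assumptions, not figures from the guide or any particular card agreement.

```python
# Illustrative sketch only: the APRs, the $5 fee, and the simple daily-interest
# math are assumptions for demonstration; real card agreements vary.

PURCHASE_APR = 0.199        # assumed APR on regular purchases
CASH_ADVANCE_APR = 0.229    # assumed (typically higher) APR on cash advances
CASH_ADVANCE_FEE = 5.00     # assumed flat fee charged to initiate an advance

def purchase_cost(amount: float, days_outstanding: int, paid_within_grace: bool) -> float:
    """Interest on a regular purchase: nothing if paid in full within the grace period."""
    if paid_within_grace:
        return 0.0
    return amount * (PURCHASE_APR / 365) * days_outstanding

def cash_advance_cost(amount: float, days_outstanding: int) -> float:
    """Cash advance: fee up front, and interest accrues from day one with no grace period."""
    return CASH_ADVANCE_FEE + amount * (CASH_ADVANCE_APR / 365) * days_outstanding

amount, days = 500.00, 30
print(f"Purchase paid within the grace period: ${purchase_cost(amount, days, True):.2f}")
print(f"Purchase carried for {days} days:       ${purchase_cost(amount, days, False):.2f}")
print(f"Cash advance outstanding {days} days:   ${cash_advance_cost(amount, days):.2f}")
```

Even under these assumed numbers, the advance costs more because interest starts immediately and a fee applies on top, which is the guide's point about cash advances being an expensive way to access cash.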
You can also expect to get a monthly statement whether you use your card or not. Statements provide a detailed snapshot for a set period of time and outline your purchases, how much you owe, the minimum payment due, and when you need to make a payment. Statements are monthly, but may not run from the first day of the month to the last. When you get your statement, be sure to review it carefully. If something doesn't make sense on your statement, or if there is something you don't recognize, don't be afraid to speak up and ask your card provider for more details.

Credit card Dos and Don'ts

DO:
• Pay off your full balance each month, if possible
• Buy things you can easily pay for
• Stay at around 50% of your credit limit
• Check your balance on a regular basis
• Become familiar with your grace periods and when interest kicks in
• Make your payments on time
• Take advantage of rewards programs

DON'T:
• Just make the minimum payment required each month
• Pay for things you can't afford
• Regularly run your balance close to your limit
• Ignore your balance and transactions
• Pay late or forget to make your payments altogether
• Make purchases just to gain rewards

Be aware of your terms and conditions: If an offer seems too good to be true, it probably is. And the same goes for credit cards. Be cautious when it comes to 0% offers in exchange for making a big purchase and be sure that you understand the terms and conditions before signing up. Zero interest doesn't last forever, and some credit cards can charge very high interest rates once their introductory offers have expired.

Interest

Every credit card has an interest rate. When you make a purchase with your credit card, your grace period for that transaction starts. This means that you have around 21–25 days (each credit card provider is different) to pay off that transaction before interest charges kick in. If you keep an unpaid balance on your credit card, interest will keep adding up month by month. But if you pay off the full balance on time, you'll never have to pay interest! And remember, only making the minimum payment required each month still means you get charged interest on the full balance.

How your credit card accrues interest (initial balance of $1,000, based on an APR of 19.9%; this does not take into account minimum payments):
• Balance owing after 1 year: $1,221.21
• Balance owing after 2 years: $1,491.36
• Balance owing after 3 years: $1,821.27
(A code sketch below shows how these figures can be approximated.)

Keeping your credit card info safe

Credit cards have a lot of great security features, but knowing how to protect your card information is probably the biggest thing you can do to protect yourself from fraud. Here are a few easy tips that can help.

Be aware of email phishing or fraudulent phone calls: Scammers will often try to create a sense of panic, trying to persuade you to give out your information. Don't do it!

Never give your credit card info over the phone or in an email: Your credit card provider or financial institution will never call you and ask for your credit card info over the phone or by email.

Cancel your card immediately if you ever lose it: Fraudulent transactions can be refunded if you report the card missing before they happen.

Don't write down your credit card information to store it: Enough said!

Review your transactions regularly and ask lots of questions: Regularly check your transactions and balance, and don't be afraid to ask your card provider questions if you don't recognize something.
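The interest accrual figures above ($1,000 growing to $1,221.21, $1,491.36, and $1,821.27) can be roughly reproduced with a compound-interest calculation. The sketch below is an approximation, not the issuer's exact method: it assumes daily compounding at a 19.9% APR with no payments made, and since the brochure does not state its precise compounding or fee convention, the outputs land near, rather than exactly on, the printed numbers.

```python
# Approximate reconstruction of the $1,000-at-19.9%-APR example.
# Assumes daily compounding and no payments; the source's exact
# compounding convention is not stated, so results are close but not identical.

APR = 0.199
INITIAL_BALANCE = 1_000.00
DAYS_PER_YEAR = 365

def balance_after_years(principal: float, apr: float, years: int) -> float:
    """Balance after `years` of daily compounding with no payments made."""
    daily_rate = apr / DAYS_PER_YEAR
    return principal * (1 + daily_rate) ** (DAYS_PER_YEAR * years)

for years in (1, 2, 3):
    balance = balance_after_years(INITIAL_BALANCE, APR, years)
    print(f"Balance owing after {years} year(s): ${balance:,.2f}")
```

Running this prints roughly $1,220, $1,489, and $1,816, in the same ballpark as the figures above; the small gap suggests the original illustration uses a slightly different compounding or rounding convention.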
Glossary

APR: Short for Annual Percentage Rate. APR is the rate charged to the amount borrowed on your credit card, expressed as a percentage. (See the Interest Rate definition.)

ANNUAL FEE: A yearly fee that is charged for having certain credit cards in your wallet. Not all credit cards have annual fees. The fee can range in price and typically includes access to other perks, points, or benefits above and beyond what you get with a standard, no-fee card.

BALANCE: How much money you owe on your credit card.

CREDIT LIMIT: The maximum dollar amount you can spend on your credit card.

GRACE PERIOD: Typically, when you make a purchase on your credit card, interest doesn't begin to accumulate immediately. Instead, you get a grace period (usually a minimum of 21 days) to make payments before you are charged interest. If you pay off the full amount owing on your card before the end of your grace period, you will not be charged interest.

INTEREST RATE: The percentage of interest that is charged on any balance owing on your card after the grace period is up. Interest is calculated daily and charged to your card monthly. Interest rates can vary from card to card.

MINIMUM PAYMENT: The smallest dollar amount that you can pay each month to keep your credit card account in good standing.

STATEMENT: Your credit card statement is a detailed list showing all of your transactions during your billing cycle, along with your balance owing (as of your statement date), your minimum payment, and when your payment is due.
has_url_in_context: false | len_system: 29 | len_user: 18 | len_context: 2,143 | target: null | row_id: 12
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
In the reference text, three trends are stated. I would like you to summarize the central idea of each trend. Their respective implication should be included in the summary. Finally, in trend one, consumers in the US are expressing a desire to have other kinds of at-home kits, could you retrieve the test with the lowest percentage?
Five trends shaping the consumer health and wellness space in 2024

Fifty-eight percent of US respondents to our survey said they are prioritizing wellness more now than they did a year ago. The following five trends encompass their newly emerging priorities, as well as those that are consistent with our earlier research.

Trend one: Health at home

The COVID-19 pandemic made at-home testing kits a household item. As the pandemic has moved into its endemic phase, consumers are expressing greater interest in other kinds of at-home kits: 26 percent of US consumers are interested in testing for vitamin and mineral deficiencies at home, 24 percent for cold and flu symptoms, and 23 percent for cholesterol levels. At-home diagnostic tests are appealing to consumers because they offer greater convenience than going to a doctor's office, quick results, and the ability to test frequently. In China, 35 percent of consumers reported that they had even replaced some in-person healthcare appointments with at-home diagnostic tests—a higher share than in the United States or the United Kingdom. Although there is growing interest in the space, some consumers express hesitancy. In the United States and the United Kingdom, top barriers to adoption include the preference to see a doctor in person, a perceived lack of need, and price; in China, test accuracy is a concern for approximately 30 percent of consumers.

Implications for companies: Companies can address three critical considerations to help ensure success in this category. First, companies will want to determine the right price value equation for at-home diagnostic kits since cost still presents a major barrier for many consumers today. Second, companies should consider creating consumer feedback loops, encouraging users to take action based on their test results and then test again to assess the impact of those interventions. Third, companies that help consumers understand their test results—either through the use of generative AI to help analyze and deliver personalized results, or through integration with telehealth services—could develop a competitive advantage.

Trend two: A new era for biomonitoring and wearables

Roughly half of all consumers we surveyed have purchased a fitness wearable at some point in time. While wearable devices such as watches have been popular for years, new modalities powered by breakthrough technologies have ushered in a new era for biomonitoring and wearable devices. Wearable biometric rings, for example, are now equipped with sensors that provide consumers with insights about their sleep quality through paired mobile apps. Continuous glucose monitors, which can be applied to the back of the user's arm, provide insights about the user's blood sugar levels, which may then be interpreted by a nutritionist who can offer personalized health guidance. Roughly one-third of surveyed wearable users said they use their devices more often than they did last year, and more than 75 percent of all surveyed consumers indicated an openness to using a wearable in the future. We expect the use of wearable devices to continue to grow, particularly as companies track a wider range of health indicators.

Implications for companies: While there is a range of effective wearable solutions on the market today for fitness and sleep, there are fewer for nutrition, weight management, and mindfulness, presenting an opportunity for companies to fill these gaps. Wearables makers and health product and services providers in areas such as nutrition, fitness, and sleep can explore partnerships that try to make the data collected through wearable devices actionable, which could drive greater behavioral change among consumers. One example: a consumer interested in managing stress levels might wear a device that tracks spikes in cortisol. Companies could then use this data to make personalized recommendations for products related to wellness, fitness, and mindfulness exercises. Businesses must keep data privacy and clarity of insights top of mind. Roughly 30 percent of China, UK, and US consumers are open to using a wearable device only if the data is shared exclusively with them. Additionally, requiring too much manual data input or sharing overly complicated insights could diminish the user experience. Ensuring that data collection is transparent and that insights are simple to understand and targeted to consumers' specific health goals or risk factors will be crucial to attracting potential consumers.

Trend three: Personalization's gen AI boost

Nearly one in five US consumers and one in three US millennials prefer personalized products and services. While the preference for personalized wellness products was lower than in years prior, we believe this is likely due to consumers becoming more selective about which personalized products and services they use. Technological advancements and the rise of first-party data are giving personalization a new edge. Approximately 20 percent of consumers in the United Kingdom and the United States and 30 percent in China look for personalized products and services that use biometric data to provide recommendations. There is an opportunity to pair these tools with gen AI to unlock greater precision and customization. In fact, gen AI has already made its way to the wearables and app space: some wearables use gen AI to design customized workouts for users based on their fitness data.

Implications for companies: Companies that offer software-based health and wellness services to consumers are uniquely positioned to incorporate gen AI into their personalization offerings. Other businesses could explore partnerships with companies that use gen AI to create personalized wellness recommendations.
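Trend three describes feeding biometric data from wearables into gen AI to produce personalized recommendations, such as customized workouts. The sketch below is purely hypothetical: the WearableSnapshot fields, the build_workout_prompt helper, and the prompt wording are invented for illustration, and no real product or vendor API is being described; it only shows one plausible way such a pipeline could assemble wearable data into a request for a gen AI model.

```python
# Hypothetical sketch of the "wearable data -> gen AI personalization" idea
# from trend three. Field names and prompt wording are invented for
# illustration; no real product or vendor API is referenced.

from dataclasses import dataclass

@dataclass
class WearableSnapshot:
    resting_heart_rate: int   # beats per minute
    sleep_hours: float        # last night's sleep, from a ring or watch
    glucose_mg_dl: int        # e.g., from a continuous glucose monitor
    steps_yesterday: int

def build_workout_prompt(user_goal: str, snapshot: WearableSnapshot) -> str:
    """Assemble biometric readings into a prompt that a gen AI model could use
    to draft a personalized workout; the actual model call is out of scope here."""
    return (
        f"Design a one-day workout for a user whose goal is '{user_goal}'. "
        f"Recent biometrics: resting heart rate {snapshot.resting_heart_rate} bpm, "
        f"{snapshot.sleep_hours:.1f} hours of sleep, glucose {snapshot.glucose_mg_dl} mg/dL, "
        f"{snapshot.steps_yesterday} steps yesterday. "
        "Keep the intensity moderate if sleep was under 7 hours."
    )

snapshot = WearableSnapshot(resting_heart_rate=62, sleep_hours=6.4,
                            glucose_mg_dl=98, steps_yesterday=8500)
print(build_workout_prompt("improve endurance", snapshot))
```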
"================ <TEXT PASSAGE> ======= Five trends shaping the consumer health and wellness space in 2024 Fifty-eight percent of US respondents to our survey said they are prioritizing wellness more now than they did a year ago. The following five trends encompass their newly emerging priorities, as well as those that are consistent with our earlier research. Trend one: Health at home The COVID-19 pandemic made at-home testing kits a household item. As the pandemic has moved into its endemic phase, consumers are expressing greater interest in other kinds of at-home kits: 26 percent of US consumers are interested in testing for vitamin and mineral deficiencies at home, 24 percent for cold and flu symptoms, and 23 percent for cholesterol levels. At-home diagnostic tests are appealing to consumers because they offer greater convenience than going to a doctor’s office, quick results, and the ability to test frequently. In China, 35 percent of consumers reported that they had even replaced some in-person healthcare appointments with at-home diagnostic tests—a higher share than in the United States or the United Kingdom. Although there is growing interest in the space, some consumers express hesitancy. In the United States and the United Kingdom, top barriers to adoption include the preference to see a doctor in person, a perceived lack of need, and price; in China, test accuracy is a concern for approximately 30 percent of consumers. Implications for companies: Companies can address three critical considerations to help ensure success in this category. First, companies will want to determine the right price value equation for at-home diagnostic kits since cost still presents a major barrier for many consumers today. Second, companies should consider creating consumer feedback loops, encouraging users to take action based on their test results and then test again to assess the impact of those interventions. Third, companies that help consumers understand their test results—either through the use of generative AI to help analyze and deliver personalized results, or through integration with telehealth services—could develop a competitive advantage. Trend two: A new era for biomonitoring and wearables Roughly half of all consumers we surveyed have purchased a fitness wearable at some point in time. While wearable devices such as watches have been popular for years, new modalities powered by breakthrough technologies have ushered in a new era for biomonitoring and wearable devices. Wearable biometric rings, for example, are now equipped with sensors that provide consumers with insights about their sleep quality through paired mobile apps. Continuous glucose monitors, which can be applied to the back of the user’s arm, provide insights about the user’s blood sugar levels, which may then be interpreted by a nutritionist who can offer personalized health guidance. Roughly one-third of surveyed wearable users said they use their devices more often than they did last year, and more than 75 percent of all surveyed consumers indicated an openness to using a wearable in the future. We expect the use of wearable devices to continue to grow, particularly as companies track a wider range of health indicators. Implications for companies: While there is a range of effective wearable solutions on the market today for fitness and sleep, there are fewer for nutrition, weight management, and mindfulness, presenting an opportunity for companies to fill these gaps. 
Wearables makers and health product and services providers in areas such as nutrition, fitness, and sleep can explore partnerships that try to make the data collected through wearable devices actionable, which could drive greater behavioral change among consumers. One example: a consumer interested in managing stress levels might wear a device that tracks spikes in cortisol. Companies could then use this data to make personalized recommendations for products related to wellness, fitness, and mindfulness exercises. Businesses must keep data privacy and clarity of insights top of mind. Roughly 30 percent of China, UK, and US consumers are open to using a wearable device only if the data is shared exclusively with them. Additionally, requiring too much manual data input or sharing overly complicated insights could diminish the user experience. Ensuring that data collection is transparent and that insights are simple to understand and targeted to consumers’ specific health goals or risk factors will be crucial to attracting potential consumers. Trend three: Personalization’s gen AI boost Nearly one in five US consumers and one in three US millennials prefer personalized products and services. While the preference for personalized wellness products was lower than in years prior, we believe this is likely due to consumers becoming more selective about which personalized products and services they use. Technological advancements and the rise of first-party data are giving personalization a new edge. Approximately 20 percent of consumers in the United Kingdom and the United States and 30 percent in China look for personalized products and services that use biometric data to provide recommendations. There is an opportunity to pair these tools with gen AI to unlock greater precision and customization. In fact, gen AI has already made its way to the wearables and app space: some wearables use gen AI to design customized workouts for users based on their fitness data. Implications for companies: Companies that offer software-based health and wellness services to consumers are uniquely positioned to incorporate gen AI into their personalization offerings. Other businesses could explore partnerships with companies that use gen AI to create personalized wellness recommendations. https://www.mckinsey.com/industries/consumer-packaged-goods/our-insights/the-trends-defining-the-1-point-8-trillion-dollar-global-wellness-market-in-2024?stcr=E8E9B8D1DADC4FF7928252A2E8D12F2B&cid=other-eml-alt-mip-mck&hlkid=3ac2023292574ef9a3db1c1785acc32d&hctky=12113536&hdpid=0df4d40d-7d9b-4711-914d-82aea6c69268 ================ <QUESTION> ======= In the reference text, three trends are stated. I would like you to summarize the central idea of each trend. Their respective implication should be included in the summary. Finally, in trend one, consumers in the US are expressing a desire to have other kinds of at-home kits, could you retrieve the test with the lowest percentage? ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." EVIDENCE: Five trends shaping the consumer health and wellness space in 2024 Fifty-eight percent of US respondents to our survey said they are prioritizing wellness more now than they did a year ago. The following five trends encompass their newly emerging priorities, as well as those that are consistent with our earlier research. Trend one: Health at home The COVID-19 pandemic made at-home testing kits a household item. As the pandemic has moved into its endemic phase, consumers are expressing greater interest in other kinds of at-home kits: 26 percent of US consumers are interested in testing for vitamin and mineral deficiencies at home, 24 percent for cold and flu symptoms, and 23 percent for cholesterol levels. At-home diagnostic tests are appealing to consumers because they offer greater convenience than going to a doctor’s office, quick results, and the ability to test frequently. In China, 35 percent of consumers reported that they had even replaced some in-person healthcare appointments with at-home diagnostic tests—a higher share than in the United States or the United Kingdom. Although there is growing interest in the space, some consumers express hesitancy. In the United States and the United Kingdom, top barriers to adoption include the preference to see a doctor in person, a perceived lack of need, and price; in China, test accuracy is a concern for approximately 30 percent of consumers. Implications for companies: Companies can address three critical considerations to help ensure success in this category. First, companies will want to determine the right price value equation for at-home diagnostic kits since cost still presents a major barrier for many consumers today. Second, companies should consider creating consumer feedback loops, encouraging users to take action based on their test results and then test again to assess the impact of those interventions. Third, companies that help consumers understand their test results—either through the use of generative AI to help analyze and deliver personalized results, or through integration with telehealth services—could develop a competitive advantage. Trend two: A new era for biomonitoring and wearables Roughly half of all consumers we surveyed have purchased a fitness wearable at some point in time. While wearable devices such as watches have been popular for years, new modalities powered by breakthrough technologies have ushered in a new era for biomonitoring and wearable devices. Wearable biometric rings, for example, are now equipped with sensors that provide consumers with insights about their sleep quality through paired mobile apps. Continuous glucose monitors, which can be applied to the back of the user’s arm, provide insights about the user’s blood sugar levels, which may then be interpreted by a nutritionist who can offer personalized health guidance. Roughly one-third of surveyed wearable users said they use their devices more often than they did last year, and more than 75 percent of all surveyed consumers indicated an openness to using a wearable in the future. 
We expect the use of wearable devices to continue to grow, particularly as companies track a wider range of health indicators. Implications for companies: While there is a range of effective wearable solutions on the market today for fitness and sleep, there are fewer for nutrition, weight management, and mindfulness, presenting an opportunity for companies to fill these gaps. Wearables makers and health product and services providers in areas such as nutrition, fitness, and sleep can explore partnerships that try to make the data collected through wearable devices actionable, which could drive greater behavioral change among consumers. One example: a consumer interested in managing stress levels might wear a device that tracks spikes in cortisol. Companies could then use this data to make personalized recommendations for products related to wellness, fitness, and mindfulness exercises. Businesses must keep data privacy and clarity of insights top of mind. Roughly 30 percent of China, UK, and US consumers are open to using a wearable device only if the data is shared exclusively with them. Additionally, requiring too much manual data input or sharing overly complicated insights could diminish the user experience. Ensuring that data collection is transparent and that insights are simple to understand and targeted to consumers’ specific health goals or risk factors will be crucial to attracting potential consumers. Trend three: Personalization’s gen AI boost Nearly one in five US consumers and one in three US millennials prefer personalized products and services. While the preference for personalized wellness products was lower than in years prior, we believe this is likely due to consumers becoming more selective about which personalized products and services they use. Technological advancements and the rise of first-party data are giving personalization a new edge. Approximately 20 percent of consumers in the United Kingdom and the United States and 30 percent in China look for personalized products and services that use biometric data to provide recommendations. There is an opportunity to pair these tools with gen AI to unlock greater precision and customization. In fact, gen AI has already made its way to the wearables and app space: some wearables use gen AI to design customized workouts for users based on their fitness data. Implications for companies: Companies that offer software-based health and wellness services to consumers are uniquely positioned to incorporate gen AI into their personalization offerings. Other businesses could explore partnerships with companies that use gen AI to create personalized wellness recommendations. USER: In the reference text, three trends are stated. I would like you to summarize the central idea of each trend. Their respective implication should be included in the summary. Finally, in trend one, consumers in the US are expressing a desire to have other kinds of at-home kits, could you retrieve the test with the lowest percentage? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 49 | len_user: 57 | len_context: 885 | target: null | row_id: 298
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
My kid is an elite athlete fielding several college scholarships. I want to know the legalities of what he can do with his name, image, and likeness deals. Several companies have approached my kid with brand deals and I want to know how it will work and what it will mean for his future financially and within the realm of his sport. What's up? We are based in California.
Rules

The current NIL rules related to college athletes apply via state law and/or NCAA rules. Because of the potential for such a patchwork of NIL rules, the NCAA is now asking for federal legislation to address it. The NCAA allows member institutions to recruit and sign high school athletes who have participated in NIL activities while the student was in high school. The current NIL rules related to high school athletes are established state to state. The possibility of 51 different sets of rules relating to NIL (50 states and the District of Columbia) and 51 different sets of rules relating to transfer (undue influence, bona fide moves, traditional academic transfer rules, and subsequent eligibility determinations) leads to many questions.

As of October 4, 2023, the following 30 states and the District of Columbia allow interscholastic athletes to receive NIL payments: Alaska, California, Colorado, Connecticut, District of Columbia, Georgia, Idaho, Illinois, Iowa, Kansas, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Nebraska, Nevada, New Jersey, New Mexico, New York, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Utah, Virginia and Washington.

This developing area requires measured and contemplative rule-making. What the landscape in both the near- and far-future will look like for both collegiate and high school NIL is anyone's guess. But one thing is for sure: NIL at both the collegiate and interscholastic levels is here to stay.

New and Different Options

The financial opportunities for high school athletes are not just for product endorsement, gear sales or the more traditional NIL models – interscholastic opportunities can be much more sophisticated. Now, there must be consideration of rules that apply to 1) the developing individual athlete "brands" which the individual athlete has personally created on social media, and 2) the number of new and developing avenues of revenue streams for these athletes that don't necessarily include U.S. currency.

An example of such an option for high school athletes is what former USC quarterback and Heisman Trophy winner Matt Leinart and current USC quarterback and Heisman Trophy winner Caleb Williams provide with their Hall of Goats organization. Hall of Goats has a platform designed to provide athletes the opportunity to drop exclusive NFT collections and take advantage of their NIL rights, while providing personal brand, content and digital resources. NFTs are "non-fungible tokens." These tokens and the platform are designed to allow athletes to use the platform to help build their brands, tell their stories and set them up for long-term success. As a result, the traditional notion of what constitutes remunerative value for a high school athlete is no longer only about "cash" but about other things of value that, if properly "negotiated," can lead to additional, non-traditional monetizing of the athlete at a time the athlete chooses.

The significance of athletes having their own brand is that it allows them access to NIL separate from their high school team and school district. The most common limitations on high school NIL are that the athlete cannot use the school's name, uniform, mascot, etc., as part of what they market for NIL. At the high school level, the brand's value is not linked to the school the athlete attends but is linked to the individual. The "athlete brand" can be quantified by the number of social media followers the athlete has and the number of views on videos that the athlete has. This is where the value to the athlete is, which is different from the collegiate model, where an athlete may get a percentage of profits from the sale of their school jersey with the number and name they wear, cashing in on the school's brand and name.
[question] My kid is an elite athlete fielding several college scholarships. I want to know the legalities of what he can do with his name, image, and likeness deals. Several companies have approached my kid with brand deals and I want to know how it will work and what it will mean for his future financially and within the realm of his sport. What's up? We are based in California. ===================== [text] Rules The current NIL rules related to college athletes apply via state law and/or NCAA rules. Due to the potential of having a patchwork of so many NIL rules, the NCAA is now asking for federal legislation addressing this patchwork of NIL rules. The NCAA allows member institutions to recruit and sign high school athletes who have participated in NIL activities while the student was in high school. The current NIL rules related to high school athletes are established state to state. There is a possibility of 51 different sets of rules relating to NIL (50 states and the District of Columbia), and 51 different sets of rules relating to transfer (undue influence, bona fide moves, traditional academic transfer rules, and subsequent eligibility determinations) leads to many questions. As of October 4, 2023, the following 30 states and the District of Columbia allow interscholastic athletes to receive NIL payments: Alaska, California, Colorado, Connecticut, District of Columbia, Georgia, Idaho, Illinois, Iowa, Kansas, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Nebraska, Nevada, New Jersey, New Mexico, New York, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Utah, Virginia and Washington. This developing area requires measured and contemplative rule-making. What the landscape in both the near- and far-future will look like for both collegiate and high school NIL is anyone’s guess. But one thing is for sure, NIL at both the collegiate and interscholastic levels is here to stay. New and Different Options The financial opportunities for high school athletes are not just for product endorsement, gear sales or the more traditional NIL models – interscholastic opportunities can be much more sophisticated. Now, there must be consideration of rules that apply to 1) the developing individual athlete “brands” which the individual athlete has personally created on social media, and 2) the number of new and developing avenues of revenue streams for these athletes that don’t necessarily include U.S. currency. An example of such an option for high school athletes is what former USC quarterback and Heisman Trophy winner Matt Leinart and current USC quarterback and Heisman Trophy winner Caleb Williams provide with their Hall of Goats organization. Hall of Goats has a platform designed to provide athletes the opportunity to drop exclusive NFT collections and take advantage of their NIL rights, while providing personal brand, content and digital resources. NFT collections are “non-functional tokens.” These tokens and the platform are designed to allow athletes to use the platform to help build their brands, tell their stories and set them up for long-term success. As a result, the traditional notions of what constitutes remunerative value for a high school athlete no longer is only about “cash” but is about other things of value that if properly “negotiated” can lead to additional, and non-traditional monetizing of the athlete, at a time the athlete chooses. 
The significance of athletes having their own brand is that it allows them access to NIL separate from their high school team and school district. The most common limitations on high school NIL are that the athlete cannot use the school’s name, uniform, mascot, etc., as part of what they market for NIL. At the high school level, the brand’s value is not linked to the school the athlete attends but is linked to the individual. The “athlete brand” can be quantified by the number of social media followers the athlete has, and the number of views on videos that the athlete has. This is where the value to the athlete is, which is different from the collegiate model where an athlete may get a percentage of profits from the sale of their school jersey with the number and name they wear, cashing in on the school’s brand and name. https://www.nfhs.org/articles/name-image-and-likeness-for-interscholastic-athletes-what-does-it-look-like/#:~:text=NIL%20allows%20high%20school%20athletes,their%20high%20school%20athletic%20eligibility. ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. EVIDENCE: Rules The current NIL rules related to college athletes apply via state law and/or NCAA rules. Due to the potential of having a patchwork of so many NIL rules, the NCAA is now asking for federal legislation addressing this patchwork of NIL rules. The NCAA allows member institutions to recruit and sign high school athletes who have participated in NIL activities while the student was in high school. The current NIL rules related to high school athletes are established state to state. There is a possibility of 51 different sets of rules relating to NIL (50 states and the District of Columbia), and 51 different sets of rules relating to transfer (undue influence, bona fide moves, traditional academic transfer rules, and subsequent eligibility determinations) leads to many questions. As of October 4, 2023, the following 30 states and the District of Columbia allow interscholastic athletes to receive NIL payments: Alaska, California, Colorado, Connecticut, District of Columbia, Georgia, Idaho, Illinois, Iowa, Kansas, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Nebraska, Nevada, New Jersey, New Mexico, New York, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Utah, Virginia and Washington. This developing area requires measured and contemplative rule-making. What the landscape in both the near- and far-future will look like for both collegiate and high school NIL is anyone’s guess. But one thing is for sure, NIL at both the collegiate and interscholastic levels is here to stay. New and Different Options The financial opportunities for high school athletes are not just for product endorsement, gear sales or the more traditional NIL models – interscholastic opportunities can be much more sophisticated. Now, there must be consideration of rules that apply to 1) the developing individual athlete “brands” which the individual athlete has personally created on social media, and 2) the number of new and developing avenues of revenue streams for these athletes that don’t necessarily include U.S. currency. An example of such an option for high school athletes is what former USC quarterback and Heisman Trophy winner Matt Leinart and current USC quarterback and Heisman Trophy winner Caleb Williams provide with their Hall of Goats organization. Hall of Goats has a platform designed to provide athletes the opportunity to drop exclusive NFT collections and take advantage of their NIL rights, while providing personal brand, content and digital resources. NFT collections are “non-functional tokens.” These tokens and the platform are designed to allow athletes to use the platform to help build their brands, tell their stories and set them up for long-term success. As a result, the traditional notions of what constitutes remunerative value for a high school athlete no longer is only about “cash” but is about other things of value that if properly “negotiated” can lead to additional, and non-traditional monetizing of the athlete, at a time the athlete chooses. The significance of athletes having their own brand is that it allows them access to NIL separate from their high school team and school district. 
The most common limitations on high school NIL are that the athlete cannot use the school’s name, uniform, mascot, etc., as part of what they market for NIL. At the high school level, the brand’s value is not linked to the school the athlete attends but is linked to the individual. The “athlete brand” can be quantified by the number of social media followers the athlete has, and the number of views on videos that the athlete has. This is where the value to the athlete is, which is different from the collegiate model where an athlete may get a percentage of profits from the sale of their school jersey with the number and name they wear, cashing in on the school’s brand and name. USER: My kid is an elite athlete fielding several college scholarships. I want to know the legalities of what he can do with his name, image, and likeness deals. Several companies have approached my kid with brand deals and I want to know how it will work and what it will mean for his future financially and within the realm of his sport. What's up? We are based in California. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
28
69
619
null
390
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
What did the Supreme Court hold in the case Miranda v. Arizona, 384 U.S. 436(1966), and did any justices dissent from the majority opinion?
Facts The Supreme Court’s decision in Miranda v. Arizona addressed four different cases involving custodial interrogations. In each of these cases, the defendant was questioned by police officers, detectives, or a prosecuting attorney in a room in which he was cut off from the outside world. In none of these cases was the defendant given a full and effective warning of his rights at the outset of the interrogation process. In all the cases, the questioning elicited oral admissions and, in three of them, signed statements that were admitted at trial. Miranda v. Arizona: Miranda was arrested at his home and taken in custody to a police station where he was identified by the complaining witness. He was then interrogated by two police officers for two hours, which resulted in a signed, written confession. At trial, the oral and written confessions were presented to the jury. Miranda was found guilty of kidnapping and rape and was sentenced to 20-30 years imprisonment on each count. On appeal, the Supreme Court of Arizona held that Miranda’s constitutional rights were not violated in obtaining the confession. Vignera v. New York: Vignera was picked up by New York police in connection with the robbery of a dress shop that had occurred three days prior. He was first taken to the 17th Detective Squad headquarters. He was then taken to the 66th Detective Squad, where he orally admitted the robbery and was placed under formal arrest. He was then taken to the 70th Precinct for detention, where he was questioned by an assistant district attorney in the presence of a hearing reporter who transcribed the questions and answers. At trial, the oral confession and the transcript were presented to the jury. Vignera was found guilty of first degree robbery and sentenced to 30-60 years imprisonment. The conviction was affirmed without opinion by the Appellate Division and the Court of Appeals. Westover v. United States: Westover was arrested by local police in Kansas City as a suspect in two Kansas City robberies and taken to a local police station. A report was also received from the FBI that Westover was wanted on a felony charge in California. Westover was interrogated the night of the arrest and the next morning by local police. Then, FBI agents continued the interrogation at the station. After two-and-a-half hours of interrogation by the FBI, Westover signed separate confessions, which had been prepared by one of the agents during the interrogation, to each of the two robberies in California. These statements were introduced at trial. Westover was convicted of the California robberies and sentenced to 15 years’ imprisonment on each count. The conviction was affirmed by the Court of Appeals for the Ninth Circuit. California v. Stewart: In the course of investigating a series of purse-snatch robberies in which one of the victims died of injuries inflicted by her assailant, Stewart was identified as the endorser of checks stolen in one of the robberies. Steward was arrested at his home. Police also arrested Stewart’s wife and three other people who were visiting him. Stewart was placed in a cell, and, over the next five days, was interrogated on nine different occasions. During the ninth interrogation session, Stewart stated that he had robbed the deceased, but had not meant to hurt her. At that time, police released the four other people arrested with Stewart because there was no evidence to connect any of them with the crime. At trial, Stewart’s statements were introduced. 
Stewart was convicted of robbery and first-degree murder and sentenced to death. The Supreme Court of California reversed, holding that Stewart should have been advised of his right to remain silent and his right to counsel. Issues Whether “statements obtained from an individual who is subjected to custodial police interrogation” are admissible against him in a criminal trial and whether “procedures which assure that the individual is accorded his privilege under the Fifth Amendment to the Constitution not to be compelled to incriminate himself” are necessary. Supreme Court holding The Court held that “there can be no doubt that the Fifth Amendment privilege is available outside of criminal court proceedings and serves to protect persons in all settings in which their freedom of action is curtailed in any significant way from being compelled to incriminate themselves.” As such, “the prosecution may not use statements, whether exculpatory or inculpatory, stemming from custodial interrogation of the defendant unless it demonstrates the use of procedural safeguards effective to secure the privilege against self-incrimination. By custodial interrogation, we mean questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way.” The Court further held that “without proper safeguards the process of in-custody interrogation of persons suspected or accused of crime contains inherently compelling pressures which work to undermine the individual’s will to resist and to compel him to speak where he would otherwise do so freely.” Therefore, a defendant “must be warned prior to any questioning that he has the right to remain silent, that anything he says can be used against him in a court of law, that he has the right to the presence of an attorney, and that if he cannot afford an attorney one will be appointed for him prior to any questioning if he so desires.” The Supreme Court reversed the judgment of the Supreme Court of Arizona in Miranda, reversed the judgment of the New York Court of Appeals in Vignera, reversed the judgment of the Court of Appeals for the Ninth Circuit in Westover, and affirmed the judgment of the Supreme Court of California in Stewart. Argued: Feb. 28, March 1 and 2, 1966 Decided: June 13, 1966 Vote: 5-4 Majority opinion written by Chief Justice Warren and joined by Justices Black, Douglas, Brennan, and Fortas. Dissenting opinion written by Justice Harlan and joined by Justices Stewart and White. Dissenting in part opinion written by Justice Clark.
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> What did the Supreme Court hold in the case Miranda v. Arizona, 384 U.S. 436(1966), and did any justices dissent from the majority opinion? <TEXT> Facts The Supreme Court’s decision in Miranda v. Arizona addressed four different cases involving custodial interrogations. In each of these cases, the defendant was questioned by police officers, detectives, or a prosecuting attorney in a room in which he was cut off from the outside world. In none of these cases was the defendant given a full and effective warning of his rights at the outset of the interrogation process. In all the cases, the questioning elicited oral admissions and, in three of them, signed statements that were admitted at trial. Miranda v. Arizona: Miranda was arrested at his home and taken in custody to a police station where he was identified by the complaining witness. He was then interrogated by two police officers for two hours, which resulted in a signed, written confession. At trial, the oral and written confessions were presented to the jury. Miranda was found guilty of kidnapping and rape and was sentenced to 20-30 years imprisonment on each count. On appeal, the Supreme Court of Arizona held that Miranda’s constitutional rights were not violated in obtaining the confession. Vignera v. New York: Vignera was picked up by New York police in connection with the robbery of a dress shop that had occurred three days prior. He was first taken to the 17th Detective Squad headquarters. He was then taken to the 66th Detective Squad, where he orally admitted the robbery and was placed under formal arrest. He was then taken to the 70th Precinct for detention, where he was questioned by an assistant district attorney in the presence of a hearing reporter who transcribed the questions and answers. At trial, the oral confession and the transcript were presented to the jury. Vignera was found guilty of first degree robbery and sentenced to 30-60 years imprisonment. The conviction was affirmed without opinion by the Appellate Division and the Court of Appeals. Westover v. United States: Westover was arrested by local police in Kansas City as a suspect in two Kansas City robberies and taken to a local police station. A report was also received from the FBI that Westover was wanted on a felony charge in California. Westover was interrogated the night of the arrest and the next morning by local police. Then, FBI agents continued the interrogation at the station. After two-and-a-half hours of interrogation by the FBI, Westover signed separate confessions, which had been prepared by one of the agents during the interrogation, to each of the two robberies in California. These statements were introduced at trial. Westover was convicted of the California robberies and sentenced to 15 years’ imprisonment on each count. The conviction was affirmed by the Court of Appeals for the Ninth Circuit. California v. Stewart: In the course of investigating a series of purse-snatch robberies in which one of the victims died of injuries inflicted by her assailant, Stewart was identified as the endorser of checks stolen in one of the robberies. Steward was arrested at his home. Police also arrested Stewart’s wife and three other people who were visiting him. Stewart was placed in a cell, and, over the next five days, was interrogated on nine different occasions. 
During the ninth interrogation session, Stewart stated that he had robbed the deceased, but had not meant to hurt her. At that time, police released the four other people arrested with Stewart because there was no evidence to connect any of them with the crime. At trial, Stewart’s statements were introduced. Stewart was convicted of robbery and first-degree murder and sentenced to death. The Supreme Court of California reversed, holding that Stewart should have been advised of his right to remain silent and his right to counsel. Issues Whether “statements obtained from an individual who is subjected to custodial police interrogation” are admissible against him in a criminal trial and whether “procedures which assure that the individual is accorded his privilege under the Fifth Amendment to the Constitution not to be compelled to incriminate himself” are necessary. Supreme Court holding The Court held that “there can be no doubt that the Fifth Amendment privilege is available outside of criminal court proceedings and serves to protect persons in all settings in which their freedom of action is curtailed in any significant way from being compelled to incriminate themselves.” As such, “the prosecution may not use statements, whether exculpatory or inculpatory, stemming from custodial interrogation of the defendant unless it demonstrates the use of procedural safeguards effective to secure the privilege against self-incrimination. By custodial interrogation, we mean questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way.” The Court further held that “without proper safeguards the process of in-custody interrogation of persons suspected or accused of crime contains inherently compelling pressures which work to undermine the individual’s will to resist and to compel him to speak where he would otherwise do so freely.” Therefore, a defendant “must be warned prior to any questioning that he has the right to remain silent, that anything he says can be used against him in a court of law, that he has the right to the presence of an attorney, and that if he cannot afford an attorney one will be appointed for him prior to any questioning if he so desires.” The Supreme Court reversed the judgment of the Supreme Court of Arizona in Miranda, reversed the judgment of the New York Court of Appeals in Vignera, reversed the judgment of the Court of Appeals for the Ninth Circuit in Westover, and affirmed the judgment of the Supreme Court of California in Stewart. Argued: Feb. 28, March 1 and 2, 1966 Decided: June 13, 1966 Vote: 5-4 Majority opinion written by Chief Justice Warren and joined by Justices Black, Douglas, Brennan, and Fortas. Dissenting opinion written by Justice Harlan and joined by Justices Stewart and White. Dissenting in part opinion written by Justice Clark. https://www.uscourts.gov/educational-resources/educational-activities/facts-and-case-summary-miranda-v-arizona
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: Facts The Supreme Court’s decision in Miranda v. Arizona addressed four different cases involving custodial interrogations. In each of these cases, the defendant was questioned by police officers, detectives, or a prosecuting attorney in a room in which he was cut off from the outside world. In none of these cases was the defendant given a full and effective warning of his rights at the outset of the interrogation process. In all the cases, the questioning elicited oral admissions and, in three of them, signed statements that were admitted at trial. Miranda v. Arizona: Miranda was arrested at his home and taken in custody to a police station where he was identified by the complaining witness. He was then interrogated by two police officers for two hours, which resulted in a signed, written confession. At trial, the oral and written confessions were presented to the jury. Miranda was found guilty of kidnapping and rape and was sentenced to 20-30 years imprisonment on each count. On appeal, the Supreme Court of Arizona held that Miranda’s constitutional rights were not violated in obtaining the confession. Vignera v. New York: Vignera was picked up by New York police in connection with the robbery of a dress shop that had occurred three days prior. He was first taken to the 17th Detective Squad headquarters. He was then taken to the 66th Detective Squad, where he orally admitted the robbery and was placed under formal arrest. He was then taken to the 70th Precinct for detention, where he was questioned by an assistant district attorney in the presence of a hearing reporter who transcribed the questions and answers. At trial, the oral confession and the transcript were presented to the jury. Vignera was found guilty of first degree robbery and sentenced to 30-60 years imprisonment. The conviction was affirmed without opinion by the Appellate Division and the Court of Appeals. Westover v. United States: Westover was arrested by local police in Kansas City as a suspect in two Kansas City robberies and taken to a local police station. A report was also received from the FBI that Westover was wanted on a felony charge in California. Westover was interrogated the night of the arrest and the next morning by local police. Then, FBI agents continued the interrogation at the station. After two-and-a-half hours of interrogation by the FBI, Westover signed separate confessions, which had been prepared by one of the agents during the interrogation, to each of the two robberies in California. These statements were introduced at trial. Westover was convicted of the California robberies and sentenced to 15 years’ imprisonment on each count. The conviction was affirmed by the Court of Appeals for the Ninth Circuit. California v. Stewart: In the course of investigating a series of purse-snatch robberies in which one of the victims died of injuries inflicted by her assailant, Stewart was identified as the endorser of checks stolen in one of the robberies. Steward was arrested at his home. Police also arrested Stewart’s wife and three other people who were visiting him. Stewart was placed in a cell, and, over the next five days, was interrogated on nine different occasions. During the ninth interrogation session, Stewart stated that he had robbed the deceased, but had not meant to hurt her. 
At that time, police released the four other people arrested with Stewart because there was no evidence to connect any of them with the crime. At trial, Stewart’s statements were introduced. Stewart was convicted of robbery and first-degree murder and sentenced to death. The Supreme Court of California reversed, holding that Stewart should have been advised of his right to remain silent and his right to counsel. Issues Whether “statements obtained from an individual who is subjected to custodial police interrogation” are admissible against him in a criminal trial and whether “procedures which assure that the individual is accorded his privilege under the Fifth Amendment to the Constitution not to be compelled to incriminate himself” are necessary. Supreme Court holding The Court held that “there can be no doubt that the Fifth Amendment privilege is available outside of criminal court proceedings and serves to protect persons in all settings in which their freedom of action is curtailed in any significant way from being compelled to incriminate themselves.” As such, “the prosecution may not use statements, whether exculpatory or inculpatory, stemming from custodial interrogation of the defendant unless it demonstrates the use of procedural safeguards effective to secure the privilege against self-incrimination. By custodial interrogation, we mean questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way.” The Court further held that “without proper safeguards the process of in-custody interrogation of persons suspected or accused of crime contains inherently compelling pressures which work to undermine the individual’s will to resist and to compel him to speak where he would otherwise do so freely.” Therefore, a defendant “must be warned prior to any questioning that he has the right to remain silent, that anything he says can be used against him in a court of law, that he has the right to the presence of an attorney, and that if he cannot afford an attorney one will be appointed for him prior to any questioning if he so desires.” The Supreme Court reversed the judgment of the Supreme Court of Arizona in Miranda, reversed the judgment of the New York Court of Appeals in Vignera, reversed the judgment of the Court of Appeals for the Ninth Circuit in Westover, and affirmed the judgment of the Supreme Court of California in Stewart. Argued: Feb. 28, March 1 and 2, 1966 Decided: June 13, 1966 Vote: 5-4 Majority opinion written by Chief Justice Warren and joined by Justices Black, Douglas, Brennan, and Fortas. Dissenting opinion written by Justice Harlan and joined by Justices Stewart and White. Dissenting in part opinion written by Justice Clark. USER: What did the Supreme Court hold in the case Miranda v. Arizona, 384 U.S. 436(1966), and did any justices dissent from the majority opinion? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
24
995
null
336
You must base your answer only on the provided text. You must not use any external sources or prior knowledge. Limit your response to 40 words.
If a customer's meter seal was broken and lost service, but now wants it reconnected, what must they do?
DISCONTINUANCE AND RECONNECTION 4.11 General: Failure of SRP at any time to suspend the delivery of service, to terminate an Agreement for Electric Service, or to seek any other legal remedy upon default or breach by the Customer will not affect SRP’s right to seek any such remedies for the same or any future default or breach by the Customer. If a Customer fails to perform as required by these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service, SRP may disconnect service. No personal visit to a Customer’s premises is required prior to disconnection of service. SRP also may disconnect service to the Customer when necessary to comply with any law or regulation applicable to SRP or the Customer, or if a Governmental Entity revokes its clearance for the provision of electrical service. 4.12 Reconnect After Disconnect for Non-Payment: 4.12.1 Seven calendar days prior to disconnecting service for a delinquent SRP billing, SRP will mail, e-mail, or personally deliver to the Customer’s premises a written notice stating the delinquent amount and that SRP intends to disconnect service unless the delinquent amount is promptly paid. This notification requirement does not apply to delinquent extensions for payment of prior billings when a seven-day notice was previously given, to delinquent extensions for payment of deposits or other up-front charges that were billed as a courtesy to the Customer, to a pre-pay account when the Customer controls timing of the disconnection based on self-management of the pre-pay balance, or to insufficient funds regarding the Customer’s payment. 4.12.2 Once SRP disconnects service, SRP will not reconnect service until the Customer (a) applies for service; (b) pays all amounts the Customer owes SRP, including past-due bills and any charges for the cost of disconnecting and reconnecting service; and (c) corrects the condition that resulted in the SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 29 disconnection. SRP may require an additional security deposit based on its evaluation of the Customer’s creditworthiness. 4.13 Disconnect and Reconnect Pursuant to a Request of a Governmental Entity: 4.13.1 If SRP receives a request to discontinue service from a Governmental Entity stating that it hasrevoked its clearance for the provision of electricalservice, SRP may immediately disconnect service to the Customer without notice. 4.13.2 Once SRP disconnects service due to a request by a Governmental Entity, SRP will not reconnect service until it receives notice that the clearance for the provision of electrical service has been restored by the Governmental Entity. 4.14 Fraud: 4.14.1 No Person shall connect a wire or contrivance to any apparatus used by SRP to supply electricity to a Customer, nor shall any Person provide Power to any device by induction from SRP’s Lines, in such manner that the Person takes electricity that is not properly metered or accounted for. No meter or other instrument installed for measuring the quantity of electricity consumed may be wrongfully obstructed, altered, injured, or prevented from functioning. When a meter seal has been broken by someone other than SRP’s personnel, SRP may assess a reconnection fee to the Customer’s billing. Bills for unmetered electricity may include the full cost or expense incurred by SRP to investigate and confirm diversion of electricity. 
SRP also reserves the right to impose additional charges, as it deems appropriate, when a provision of this Section 4.14.1 has been violated. Bills for all such charges are due and payable immediately upon presentation unless otherwise agreed by SRP. In addition to the remedies herein, SRP reserves all legal rights available to it including pursuing criminal prosecutions against, and criminal and civil damages from, any Party that violates this Section 4.14.1 or applicable law. 4.14.2 If SRP has evidence that any Customer has caused or allowed any of the conditions of Section 4.14.1 to exist, SRP may, at any time, without notice, discontinue the supply of electricity to the Customer and remove the meter or meters, apparatus, wires, and Service Lateral, as well as any evidence of the condition. 4.14.3 SRP will charge the Customer for periods of unmetered service, estimated using data from available records and information. In the event of damage to meters or Service Equipment, the current Customer of record shall pay SRP based on estimated Energy usage not previously billed as well as any SRP costs associated with restoring proper metering or service. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 30 4.14.4 If SRP disconnects service to the Customer because of a violation of these Rules and Regulations, SRP will not restore service to the Customer until all amounts due SRP have been paid. SRP will include the full cost or expense incurred by SRP for the removal and reinstallation of the meter or meters, apparatus, wires, and Service Lateral. The Customer’s service entrance must comply with SRP’s then-current Electric Service Specifications before it can be re-energized. 4.15 Other Reasons for Discontinuance: 4.15.1 SRP may terminate an Agreement for Electric Service with a Customer or suspend the delivery of service for any other default or breach of the Agreement by the Customer, but, except as expressly provided otherwise in these Rules and Regulations, SRP will not terminate or suspend service without first giving written notice to the Customer, stating in what particular way the Agreement has been violated. 4.15.2 SRP may terminate or suspend delivery of service in the event of a short circuit or other electrical system failure on the Customer’s side of the Point of Delivery or, if the utilization of the service by the Customer, in SRP’s sole discretion, is a safety hazard or may cause damage to Persons or property (“Emergency Disconnect”). Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, a Customer’s Agreement for Electric Service, or the Distributed Generation Interconnection Handbook, no advance notice need be given to the Customer in the event of an Emergency Disconnect. 4.15.3 Upon prior written notice, SRP may terminate or suspend the delivery of service if: (a) the Customer refuses to grant or is unable to procure easements necessary for or incidental to SRP’s facilities or its provision of service to the Customer according to Section 5.1.5 or any written agreement between SRP and the Customer, or (b) SRP is not provided proper access to SRP Lines, Service Laterals, meters, or other equipment located on property owned or controlled by the Customer to perform maintenance or repair of SRP facilities, to provide service to the Customer, or to read meters on the Customer’s premises. 
4.15.4 Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service or the Distributed Generation Interconnection Handbook, SRP may disconnect a Customer at any time, without notice, and remove the meter or meters if the Customer has misrepresented his or her identity in any manner. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 31 4.15.5 Upon prior written notice to Customer, SRP may terminate or suspend the delivery of electric service to any Customer who: (a) without obtaining SRP’s prior written approval, connects or allows the connection of a Distributed Energy Device to any portion of such Customer’s electric system; (b) is required to sign SRP’s Interconnection Agreement but refuses to do so; (c) fails to procure the signature of a third-party owner or operator on SRP’s Interconnection Agreement; or (d) fails to comply with the SRP’s Distributed Generation Interconnection Handbook. 4.15.6 SRP may terminate or suspend delivery of electric service at any time, without notice, if a Customer’s identity cannot be established to SRP’s satisfaction
system instruction: You must base your answer only on the provided text. You must not use any external sources or prior knowledge. Limit your response to 40 words. question: If a customer's meter seal was broken and lost service, but now wants it reconnected, what must they do? context block: DISCONTINUANCE AND RECONNECTION 4.11 General: Failure of SRP at any time to suspend the delivery of service, to terminate an Agreement for Electric Service, or to seek any other legal remedy upon default or breach by the Customer will not affect SRP’s right to seek any such remedies for the same or any future default or breach by the Customer. If a Customer fails to perform as required by these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service, SRP may disconnect service. No personal visit to a Customer’s premises is required prior to disconnection of service. SRP also may disconnect service to the Customer when necessary to comply with any law or regulation applicable to SRP or the Customer, or if a Governmental Entity revokes its clearance for the provision of electrical service. 4.12 Reconnect After Disconnect for Non-Payment: 4.12.1 Seven calendar days prior to disconnecting service for a delinquent SRP billing, SRP will mail, e-mail, or personally deliver to the Customer’s premises a written notice stating the delinquent amount and that SRP intends to disconnect service unless the delinquent amount is promptly paid. This notification requirement does not apply to delinquent extensions for payment of prior billings when a seven-day notice was previously given, to delinquent extensions for payment of deposits or other up-front charges that were billed as a courtesy to the Customer, to a pre-pay account when the Customer controls timing of the disconnection based on self-management of the pre-pay balance, or to insufficient funds regarding the Customer’s payment. 4.12.2 Once SRP disconnects service, SRP will not reconnect service until the Customer (a) applies for service; (b) pays all amounts the Customer owes SRP, including past-due bills and any charges for the cost of disconnecting and reconnecting service; and (c) corrects the condition that resulted in the SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 29 disconnection. SRP may require an additional security deposit based on its evaluation of the Customer’s creditworthiness. 4.13 Disconnect and Reconnect Pursuant to a Request of a Governmental Entity: 4.13.1 If SRP receives a request to discontinue service from a Governmental Entity stating that it hasrevoked its clearance for the provision of electricalservice, SRP may immediately disconnect service to the Customer without notice. 4.13.2 Once SRP disconnects service due to a request by a Governmental Entity, SRP will not reconnect service until it receives notice that the clearance for the provision of electrical service has been restored by the Governmental Entity. 4.14 Fraud: 4.14.1 No Person shall connect a wire or contrivance to any apparatus used by SRP to supply electricity to a Customer, nor shall any Person provide Power to any device by induction from SRP’s Lines, in such manner that the Person takes electricity that is not properly metered or accounted for. No meter or other instrument installed for measuring the quantity of electricity consumed may be wrongfully obstructed, altered, injured, or prevented from functioning. 
When a meter seal has been broken by someone other than SRP’s personnel, SRP may assess a reconnection fee to the Customer’s billing. Bills for unmetered electricity may include the full cost or expense incurred by SRP to investigate and confirm diversion of electricity. SRP also reserves the right to impose additional charges, as it deems appropriate, when a provision of this Section 4.14.1 has been violated. Bills for all such charges are due and payable immediately upon presentation unless otherwise agreed by SRP. In addition to the remedies herein, SRP reserves all legal rights available to it including pursuing criminal prosecutions against, and criminal and civil damages from, any Party that violates this Section 4.14.1 or applicable law. 4.14.2 If SRP has evidence that any Customer has caused or allowed any of the conditions of Section 4.14.1 to exist, SRP may, at any time, without notice, discontinue the supply of electricity to the Customer and remove the meter or meters, apparatus, wires, and Service Lateral, as well as any evidence of the condition. 4.14.3 SRP will charge the Customer for periods of unmetered service, estimated using data from available records and information. In the event of damage to meters or Service Equipment, the current Customer of record shall pay SRP based on estimated Energy usage not previously billed as well as any SRP costs associated with restoring proper metering or service. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 30 4.14.4 If SRP disconnects service to the Customer because of a violation of these Rules and Regulations, SRP will not restore service to the Customer until all amounts due SRP have been paid. SRP will include the full cost or expense incurred by SRP for the removal and reinstallation of the meter or meters, apparatus, wires, and Service Lateral. The Customer’s service entrance must comply with SRP’s then-current Electric Service Specifications before it can be re-energized. 4.15 Other Reasons for Discontinuance: 4.15.1 SRP may terminate an Agreement for Electric Service with a Customer or suspend the delivery of service for any other default or breach of the Agreement by the Customer, but, except as expressly provided otherwise in these Rules and Regulations, SRP will not terminate or suspend service without first giving written notice to the Customer, stating in what particular way the Agreement has been violated. 4.15.2 SRP may terminate or suspend delivery of service in the event of a short circuit or other electrical system failure on the Customer’s side of the Point of Delivery or, if the utilization of the service by the Customer, in SRP’s sole discretion, is a safety hazard or may cause damage to Persons or property (“Emergency Disconnect”). Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, a Customer’s Agreement for Electric Service, or the Distributed Generation Interconnection Handbook, no advance notice need be given to the Customer in the event of an Emergency Disconnect. 
4.15.3 Upon prior written notice, SRP may terminate or suspend the delivery of service if: (a) the Customer refuses to grant or is unable to procure easements necessary for or incidental to SRP’s facilities or its provision of service to the Customer according to Section 5.1.5 or any written agreement between SRP and the Customer, or (b) SRP is not provided proper access to SRP Lines, Service Laterals, meters, or other equipment located on property owned or controlled by the Customer to perform maintenance or repair of SRP facilities, to provide service to the Customer, or to read meters on the Customer’s premises. 4.15.4 Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service or the Distributed Generation Interconnection Handbook, SRP may disconnect a Customer at any time, without notice, and remove the meter or meters if the Customer has misrepresented his or her identity in any manner. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 31 4.15.5 Upon prior written notice to Customer, SRP may terminate or suspend the delivery of electric service to any Customer who: (a) without obtaining SRP’s prior written approval, connects or allows the connection of a Distributed Energy Device to any portion of such Customer’s electric system; (b) is required to sign SRP’s Interconnection Agreement but refuses to do so; (c) fails to procure the signature of a third-party owner or operator on SRP’s Interconnection Agreement; or (d) fails to comply with the SRP’s Distributed Generation Interconnection Handbook. 4.15.6 SRP may terminate or suspend delivery of electric service at any time, without notice, if a Customer’s identity cannot be established to SRP’s satisfaction
You must base your answer only on the provided text. You must not use any external sources or prior knowledge. Limit your response to 40 words. EVIDENCE: DISCONTINUANCE AND RECONNECTION 4.11 General: Failure of SRP at any time to suspend the delivery of service, to terminate an Agreement for Electric Service, or to seek any other legal remedy upon default or breach by the Customer will not affect SRP’s right to seek any such remedies for the same or any future default or breach by the Customer. If a Customer fails to perform as required by these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service, SRP may disconnect service. No personal visit to a Customer’s premises is required prior to disconnection of service. SRP also may disconnect service to the Customer when necessary to comply with any law or regulation applicable to SRP or the Customer, or if a Governmental Entity revokes its clearance for the provision of electrical service. 4.12 Reconnect After Disconnect for Non-Payment: 4.12.1 Seven calendar days prior to disconnecting service for a delinquent SRP billing, SRP will mail, e-mail, or personally deliver to the Customer’s premises a written notice stating the delinquent amount and that SRP intends to disconnect service unless the delinquent amount is promptly paid. This notification requirement does not apply to delinquent extensions for payment of prior billings when a seven-day notice was previously given, to delinquent extensions for payment of deposits or other up-front charges that were billed as a courtesy to the Customer, to a pre-pay account when the Customer controls timing of the disconnection based on self-management of the pre-pay balance, or to insufficient funds regarding the Customer’s payment. 4.12.2 Once SRP disconnects service, SRP will not reconnect service until the Customer (a) applies for service; (b) pays all amounts the Customer owes SRP, including past-due bills and any charges for the cost of disconnecting and reconnecting service; and (c) corrects the condition that resulted in the SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 29 disconnection. SRP may require an additional security deposit based on its evaluation of the Customer’s creditworthiness. 4.13 Disconnect and Reconnect Pursuant to a Request of a Governmental Entity: 4.13.1 If SRP receives a request to discontinue service from a Governmental Entity stating that it hasrevoked its clearance for the provision of electricalservice, SRP may immediately disconnect service to the Customer without notice. 4.13.2 Once SRP disconnects service due to a request by a Governmental Entity, SRP will not reconnect service until it receives notice that the clearance for the provision of electrical service has been restored by the Governmental Entity. 4.14 Fraud: 4.14.1 No Person shall connect a wire or contrivance to any apparatus used by SRP to supply electricity to a Customer, nor shall any Person provide Power to any device by induction from SRP’s Lines, in such manner that the Person takes electricity that is not properly metered or accounted for. No meter or other instrument installed for measuring the quantity of electricity consumed may be wrongfully obstructed, altered, injured, or prevented from functioning. When a meter seal has been broken by someone other than SRP’s personnel, SRP may assess a reconnection fee to the Customer’s billing. 
Bills for unmetered electricity may include the full cost or expense incurred by SRP to investigate and confirm diversion of electricity. SRP also reserves the right to impose additional charges, as it deems appropriate, when a provision of this Section 4.14.1 has been violated. Bills for all such charges are due and payable immediately upon presentation unless otherwise agreed by SRP. In addition to the remedies herein, SRP reserves all legal rights available to it including pursuing criminal prosecutions against, and criminal and civil damages from, any Party that violates this Section 4.14.1 or applicable law. 4.14.2 If SRP has evidence that any Customer has caused or allowed any of the conditions of Section 4.14.1 to exist, SRP may, at any time, without notice, discontinue the supply of electricity to the Customer and remove the meter or meters, apparatus, wires, and Service Lateral, as well as any evidence of the condition. 4.14.3 SRP will charge the Customer for periods of unmetered service, estimated using data from available records and information. In the event of damage to meters or Service Equipment, the current Customer of record shall pay SRP based on estimated Energy usage not previously billed as well as any SRP costs associated with restoring proper metering or service. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 30 4.14.4 If SRP disconnects service to the Customer because of a violation of these Rules and Regulations, SRP will not restore service to the Customer until all amounts due SRP have been paid. SRP will include the full cost or expense incurred by SRP for the removal and reinstallation of the meter or meters, apparatus, wires, and Service Lateral. The Customer’s service entrance must comply with SRP’s then-current Electric Service Specifications before it can be re-energized. 4.15 Other Reasons for Discontinuance: 4.15.1 SRP may terminate an Agreement for Electric Service with a Customer or suspend the delivery of service for any other default or breach of the Agreement by the Customer, but, except as expressly provided otherwise in these Rules and Regulations, SRP will not terminate or suspend service without first giving written notice to the Customer, stating in what particular way the Agreement has been violated. 4.15.2 SRP may terminate or suspend delivery of service in the event of a short circuit or other electrical system failure on the Customer’s side of the Point of Delivery or, if the utilization of the service by the Customer, in SRP’s sole discretion, is a safety hazard or may cause damage to Persons or property (“Emergency Disconnect”). Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, a Customer’s Agreement for Electric Service, or the Distributed Generation Interconnection Handbook, no advance notice need be given to the Customer in the event of an Emergency Disconnect. 
4.15.3 Upon prior written notice, SRP may terminate or suspend the delivery of service if: (a) the Customer refuses to grant or is unable to procure easements necessary for or incidental to SRP’s facilities or its provision of service to the Customer according to Section 5.1.5 or any written agreement between SRP and the Customer, or (b) SRP is not provided proper access to SRP Lines, Service Laterals, meters, or other equipment located on property owned or controlled by the Customer to perform maintenance or repair of SRP facilities, to provide service to the Customer, or to read meters on the Customer’s premises. 4.15.4 Notwithstanding any other provision of these Rules and Regulations, the Price Plans, the Electric Service Specifications, or the Customer’s Agreement for Electric Service or the Distributed Generation Interconnection Handbook, SRP may disconnect a Customer at any time, without notice, and remove the meter or meters if the Customer has misrepresented his or her identity in any manner. SALT RIVER PROJECT AGRICULTURAL IMPROVEMENT AND POWER DISTRICT RULES AND REGULATIONS 31 4.15.5 Upon prior written notice to Customer, SRP may terminate or suspend the delivery of electric service to any Customer who: (a) without obtaining SRP’s prior written approval, connects or allows the connection of a Distributed Energy Device to any portion of such Customer’s electric system; (b) is required to sign SRP’s Interconnection Agreement but refuses to do so; (c) fails to procure the signature of a third-party owner or operator on SRP’s Interconnection Agreement; or (d) fails to comply with the SRP’s Distributed Generation Interconnection Handbook. 4.15.6 SRP may terminate or suspend delivery of electric service at any time, without notice, if a Customer’s identity cannot be established to SRP’s satisfaction USER: If a customer's meter seal was broken and lost service, but now wants it reconnected, what must they do? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
19
1,275
null
388
Answer only based on information from the below text. Use a bulleted list.
Find and summarize each instance where the text talks about convenience. Please make it highly detailed.
ABSTRACT: Time is an expensive resource in our fast-paced society, and people frequently lose a good deal of it waiting at supermarket and shopping mall checkout counters. An automated intelligent shopping cart has been designed for supermarkets to address the shortcomings of the current billing systems. This trolley reduces the amount of time customers spend at the checkout counter, improving convenience and saving time, by scanning products using the Atmega 328 controller and RFID tags. Customers can improve their shopping experience by monitoring the number of items and the overall cost on the digital bill shown on an LCD. With electronic bills sent via email and detailed purchase information available through the shop's website, the intelligent cart manages the shopping and payment procedures, allowing customers to buy their items and leave the store quickly. To manage product and customer information, the system needs an Arduino board, an RFID reader, an RFID tag, an LCD display, a database manager, and a website. Leveraging the Internet of Things (IoT) for a smooth connection with the global network, the administrator can access this information anywhere.

Keywords: Arduino UNO; Ultrasonic sensor; IR sensor; DC motors; RFID reader; LCD display; Atmega328 controller; Motor drivers.

Smart trolleys have successfully addressed this problem. The main objective of this initiative is to reduce the length of time that customers have to wait before they can pay their bills [1]. The pricing and billing for the items in the cart are automated. This application comprises an Arduino Uno, an LCD display, a buzzer, RFID tags, and an RFID reader. The Arduino development board used in this system has fully accessible input/output pins to enable communication with the reader. The trolley is outfitted with an RFID reader, and each product is linked to an RFID tag [2]. Once products are placed in the shopping cart, the RFID reader reads the tags, and the relevant information, such as the product's name, price, and quantity, is shown on the LCD screen. An automated alert system equipped with a buzzer prompts the user to scan each product. As a result, a bill is produced immediately on the cart, and because the process is fully automated, human billing errors are eliminated.
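To make the scan-and-bill loop described above concrete, the following is a minimal Arduino-style sketch in C++. The paper does not specify a particular reader module, display wiring, or product database, so the MFRC522 reader, the LCD pin assignments, and the two-item catalogue below are assumptions for illustration, not the authors' implementation.

// Minimal scan-and-bill sketch for an Arduino Uno with an RFID reader,
// a 16x2 LCD and a buzzer. The MFRC522 reader, the pin wiring and the
// two-item catalogue are illustrative assumptions, not the paper's design.
#include <SPI.h>
#include <MFRC522.h>
#include <LiquidCrystal.h>

const uint8_t SS_PIN = 10, RST_PIN = 9, BUZZER_PIN = 8;
MFRC522 rfid(SS_PIN, RST_PIN);               // RFID reader on the SPI bus
LiquidCrystal lcd(7, 6, 5, 4, 3, 2);         // RS, E, D4-D7 (assumed wiring)

// Hypothetical catalogue keyed by the first byte of each tag's UID.
struct Product { uint8_t key; const char *name; float price; };
Product catalogue[] = { {0x1A, "Milk", 1.20f}, {0x2B, "Bread", 0.90f} };

float total = 0.0f;                          // running bill shown on the cart

void setup() {
  SPI.begin();
  rfid.PCD_Init();
  lcd.begin(16, 2);
  pinMode(BUZZER_PIN, OUTPUT);
  lcd.print("Scan items...");
}

void loop() {
  // Wait until a new tag enters the reader's field and its UID is read.
  if (!rfid.PICC_IsNewCardPresent() || !rfid.PICC_ReadCardSerial()) return;

  uint8_t key = rfid.uid.uidByte[0];         // crude lookup key for the demo
  for (Product &p : catalogue) {
    if (p.key == key) {
      total += p.price;                      // add the item to the bill
      tone(BUZZER_PIN, 1000, 100);           // audible confirmation of the scan
      lcd.clear();
      lcd.print(p.name);                     // product name on line 1
      lcd.setCursor(0, 1);
      lcd.print("Total: ");
      lcd.print(total);                      // running total on line 2
    }
  }
  rfid.PICC_HaltA();                         // stop talking to this tag
}

A real cart would look each tag up in the store's product database and push the running bill to the shop's website, as the abstract describes; this sketch only covers the on-cart scan, alert, and display loop.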
Keywords: Arduino UNO; Ultrasonic sensor; IR sensor; DC motors; RFID reader; LCD display; Atmega328 controller; Motor drivers. Irish Interdisciplinary Journal of Science & Research (IIJSR) Volume 8, Issue 2, Pages 113-122, April-June 2024 ISSN: 2582-3981 [114] to effortlessly traverse the establishment, circumventing the necessity to push their shopping cart or be concerned with its misplacement. By effortlessly concentrating on the products they are interested in purchasing, they are able to dedicate more time to perusing [5]. Customers with limited mobility or disabilities get an added level of assistance from intelligent shopping carts that track and follow them. These customers may find it difficult to propel a shopping cart. Nevertheless, the intelligent trolley presents a viable resolution that holds the capacity to augment the ease and pleasure derived from the act of shopping [6]. In addition, the integration of intelligent shopping carts—which possess the capability to independently navigate and accompany customers—substantially augments the shopping experience in terms of convenience and effectiveness. Consumers are able to effortlessly locate the desired products, incorporate them into their shopping carts, and proceed to the subsequent item without the necessity of monitoring their carts. As a result, patrons are able to enhance their shopping experience through time conservation and a reduction in the customary anxiety linked to the procedure [7]. By maintaining a linear trajectory, the robot is capable of traversing the lane of shopping racks with ease. An ultrasonic sensor is additionally affixed to the front of the robotic vehicle. The sensor is utilized to determine the user's proximity to the robot [8]. The customer is monitored by the robot from a predetermined distance as they navigate the shopping lane. The system therefore recommends a sophisticated shopping cart for contemporary shopping malls. A smart shopping cart that makes use of Internet of Things (IoT) technology is the proposed concept [9]. A versatile application and Radio Frequency Identification (RFID) sensors are integrated into it. Additionally, an Arduino microcontroller is also present. RFID sensors operate via wireless transmission. The process consists of two essential elements: an RFID tag affixed to every item and a user-specific RFID reader that efficiently scans the item data. The corresponding data for each item is then displayed within the mobile application. The client effectively oversees the shopping list using the adaptable application in accordance with their personal preferences. The shopping information is subsequently transmitted remotely to the employee, who generates the charges. The primary aim of this testing framework is to eliminate arduous shopping processes and technical administration complications. Subsequently, the proposed framework may be readily deployable and verifiable in an extensive operational setting [10]. This clarifies the rationale behind the proposed model's higher level of stringency in comparison to alternative methodologies. The integration of state-of-the-art technologies into a smart shopping cart is intended to revolutionize the traditional shopping experience in multiple ways. It optimizes operational effectiveness through the provision of user-friendly functionalities that streamline the process of item retrieval and diminish the duration of shopping [11]. 
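The paper describes the RFID billing loop above in prose only and gives no firmware, reader model, or pin map, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes an MFRC522 SPI reader module, a 16x2 parallel LCD, a piezo buzzer on pin 8, and a small hard-coded product table keyed by tag UID; the UIDs, prices, and wiring are invented for the example, and in the described system the product data would instead come from the store database.

```cpp
// Minimal sketch of the RFID billing loop described above (not the authors' code).
// Assumed hardware: MFRC522 SPI reader, 16x2 parallel LCD, piezo buzzer on pin 8.
// The product table is hard-coded here; the paper's system reads it from a database.
#include <SPI.h>
#include <MFRC522.h>
#include <LiquidCrystal.h>
#include <string.h>

const uint8_t SS_PIN = 10, RST_PIN = 9, BUZZER_PIN = 8;
MFRC522 reader(SS_PIN, RST_PIN);
LiquidCrystal lcd(7, 6, 5, 4, 3, 2);        // RS, EN, D4..D7 (example wiring)

struct Product { byte uid[4]; const char* name; float price; };
Product catalog[] = {                        // hypothetical tag UIDs and prices
  {{0xDE, 0xAD, 0xBE, 0xEF}, "Milk 1L", 1.20},
  {{0x12, 0x34, 0x56, 0x78}, "Bread",   0.90},
};
float billTotal = 0;
int   itemCount = 0;

void setup() {
  SPI.begin();
  reader.PCD_Init();                         // initialise the RC522 reader
  lcd.begin(16, 2);
  lcd.print("Scan items...");
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  // Wait until a new tag enters the reader's field and its UID can be read.
  if (!reader.PICC_IsNewCardPresent() || !reader.PICC_ReadCardSerial()) return;

  for (Product &p : catalog) {
    if (memcmp(reader.uid.uidByte, p.uid, 4) == 0) {
      itemCount++;
      billTotal += p.price;
      tone(BUZZER_PIN, 1000, 150);           // buzzer alert: item accepted
      lcd.clear();
      lcd.print(p.name);                     // product name on line 1
      lcd.setCursor(0, 1);
      lcd.print("Qty ");
      lcd.print(itemCount);
      lcd.print(" $");
      lcd.print(billTotal);                  // running bill on line 2
      break;
    }
  }
  reader.PICC_HaltA();                       // stop talking to this tag
}
```

A real cart would also need to handle removing items from the bill and pushing the final total to the store website and email system, which the sketch omits.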
Digital shopping lists, automated item scanning, and user-friendly payment methods substantially enhance convenience. By encouraging the use of reusable bags, reducing plastic waste, and informing customers about sustainable products, the cart promotes sustainability. By providing customers with real-time price comparisons, discounts, and promotions, cost-effectiveness is achieved and customers are able to make more informed decisions. The shopping cart incorporates accessibility features that accommodate a diverse array of customers, including individuals with disabilities. Customers gain access to recipes, nutritional information, and personalized recommendations, while retailers gain data-driven insights into consumer behavior, purchasing patterns, and inventory management. Safety is ensured through secure locking mechanisms, RFID technology for item tracking, and hazard alarms, and seamless connectivity with mobile devices is provided. Constant advancements in functionality and design ensure that the shopping cart remains at the forefront of market trends, with the ultimate goal of improving the customer experience by providing a seamless, enjoyable, and expedient journey that cultivates loyalty towards the retailer.
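Returning to the customer-following behaviour described earlier: the paper states only that a front-mounted ultrasonic sensor keeps the trolley at a predetermined distance from the user, with DC motors driven through motor drivers, and gives no control logic, sensor model, or wiring. The sketch below is therefore an assumption-laden illustration using an HC-SR04-style sensor and an L298N-style driver with a simple distance band (stop when the user is close, drive forward when the gap opens). The pin numbers and thresholds are invented for the example.

```cpp
// Illustrative follow-at-a-distance loop for the customer-following trolley
// described earlier (a sketch under assumed hardware, not the authors' code).
// Assumes an HC-SR04-style ultrasonic sensor and an L298N-style motor driver
// running both DC motors on one channel; steering and obstacle handling are omitted.
const uint8_t TRIG_PIN = A0, ECHO_PIN = A1;
const uint8_t ENA_PIN = 5, IN1_PIN = 6, IN2_PIN = 7;   // one driver channel
const long FOLLOW_CM = 60;      // predetermined following distance
const long STOP_CM   = 40;      // closer than this: stop

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000UL);           // echo time, 30 ms timeout
  return us == 0 ? -1 : us / 58;                        // roughly 58 us per cm round trip
}

void drive(int pwm) {            // pwm > 0 drives forward, 0 stops
  digitalWrite(IN1_PIN, pwm > 0 ? HIGH : LOW);
  digitalWrite(IN2_PIN, LOW);
  analogWrite(ENA_PIN, constrain(pwm, 0, 255));
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT); pinMode(ECHO_PIN, INPUT);
  pinMode(ENA_PIN, OUTPUT);  pinMode(IN1_PIN, OUTPUT); pinMode(IN2_PIN, OUTPUT);
}

void loop() {
  long d = readDistanceCm();
  if (d < 0 || d <= STOP_CM) {
    drive(0);                    // no echo or customer too close: hold position
  } else if (d > FOLLOW_CM) {
    drive(180);                  // gap has opened: move toward the customer
  } else {
    drive(90);                   // inside the band: creep forward slowly
  }
  delay(60);                     // modest loop rate; the sensor needs settling time
}
```

A production trolley would additionally need steering toward the tracked person, obstacle avoidance, and filtering of the noisy distance readings.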
false
13
16
1,206
null
790
Use only the information provided above to answer the question. Answer in paragraph form and keep your answer to under 150 words.
What happens if a home title is listed as Joint Tenants with Rights of Survivorship when one of the owners sells their share of the property to someone else?
To create a joint tenancy, be sure to get the right legal words on the deed or title document. Joint tenancy with the right of survivorship is a popular way to avoid probate. It certainly has the virtue of simplicity. To create a joint tenancy with the right of survivorship, all you need to do is put the right words on the title document, such as a deed to real estate, a car's title slip, or the signature card establishing a bank account. What exactly is a joint tenancy with right of survivorship (often shortened simply to "joint tenancy")? It's a co-ownership method that comes with the right to take a deceased co-owner's share of the property. If you co-own a piece of property with someone as joint tenants with the right of survivorship, when your co-owner dies, you automatically own their half of the property, and vice versa. (Contrast joint tenancy with a tenancy in common.) While many use "joint tenancy" interchangeably with "joint tenancy with right of survivorship," and we do so as well in this article, be aware that a few states (such as Texas) have different norms. In situations where you want to be absolutely clear, be sure to include "with right of survivorship." In the great majority of states, if you and your co-owners own property as "joint tenants with the right of survivorship" or put the abbreviation "JT WROS" after your names on the title document, you not only co-own the property, but you own it in a way that automatically determines who will own it when one of you dies. A car salesman or bank staffer may assure you that other words are enough. For example, connecting the names of the owners with the word "or," not "and," does create a joint tenancy, in some circumstances, in some states. But it's always better to unambiguously spell out what you want: joint tenancy with right of survivorship. When Ken and his wife, Janelle, buy a house, they want to take title in joint tenancy. When the deed that transfers the house to them is prepared, all they need to do is tell the title company to identify them on the deed in this way: Kenneth J. Hartman and Janelle M. Grubcek, as joint tenants with right of survivorship. There should be no extra cost or paperwork. Joint tenancy—or a form of ownership that achieves the same probate-avoiding result—is available in all states, although a few impose restrictions, such as the ones summarized below. In addition, one rule applies in every state except Colorado, Connecticut, North Carolina, Ohio, and Vermont: All joint tenants must own equal shares of the property. If you want a different arrangement, such as 60%-40% ownership, joint tenancy is not for you. Alaska: Joint tenancy is not allowed for real estate, but married spouses may own as tenants by the entirety. Oregon: A transfer to married spouses creates tenancy by the entirety unless the document clearly states otherwise. Tennessee: A transfer to husband and wife creates tenancy by the entirety, not joint tenancy. Wisconsin: Joint tenancy is not available between spouses, but survivorship marital property is. Learn more about tenancy by the entirety, which has many similarities to joint tenancy, but is available only to married couples. Especially when it comes to real estate, all law is local, so be sure you know your state's rules on what language is required to create a joint tenancy with the right of survivorship. While "as joint tenants with right of survivorship" works in many situations, the specific laws of your state might vary slightly. 
Joint tenancy deeds can look a little different, depending on your state. If you're not sure, talk to a local real estate lawyer. Here are just a few special state rules. Michigan: Michigan has two forms of joint tenancy. A traditional joint tenancy is formed when property is transferred to two or more persons using the language "as joint tenants and not as tenants in common." Any owner may terminate the joint tenancy unilaterally (without the consent of the other owner). If, however, property is transferred to the new owners using the language "as joint tenants with right of survivorship" or to the new owners "and the survivor of them," the result is different. No owner can destroy this joint tenancy unilaterally. Even if you transfer your interest to someone else, that person takes it subject to the rights of your original co-owner. So if you were to die before your original co-owner, that co-owner would automatically own the whole property. EXAMPLE: Alice and Ben own land in Michigan as "joint tenants with full right of survivorship." Alice sells her interest to Catherine and dies a few years later, while Ben is still alive. Ben now owns the whole property; Catherine owns nothing. Oregon: Oregon doesn't use the term "joint tenancy"; instead, you create a survivorship estate. The result is the same as with a joint tenancy: when one owner dies, the surviving owner owns the whole property. But technically, creating a survivorship estate creates what the lawyers call "a tenancy in common in the life estate with cross-contingent remainders in the fee simple." (That clears it up, doesn't it?) South Carolina: To hold real estate in joint tenancy, the deed should use the words "as joint tenants with rights of survivorship, and not as tenants in common," just to make it crystal clear. (S.C. Code Ann. § 27-7-40.) Texas: If you want to set up a joint tenancy in Texas, you and the other joint tenants might have to sign a written agreement. For example, if you want to create a joint tenancy bank account, so that the survivor will get all the funds, specifying your arrangement on the bank's signature card may not be enough. Fortunately, a bank or real estate office should be able to give you a fill-in-the-blanks form. Take this requirement seriously. A dispute over such an account ended up in the Texas Supreme Court. Two sisters had set up an account together, using a signature card that allowed the survivor to withdraw the funds. But when one sister died, and the other withdrew the funds, the estate of the deceased sister sued—and won the funds—because the signature card's language didn't satisfy the requirements of the Texas statute. (Stauffer v. Henderson, 801 S.W.2d 858 (Tex. 1991).) More recently, the Texas Supreme Court ruled that a married couple who owned investment accounts labeled "JT TEN" did have survivorship rights, even though they hadn't signed anything stating whether or not the account had a survivorship feature. Holmes v. Beatty, 290 S.W.3d 852 (Tex. 2009). But it's still better to be explicit about your intentions. Joint tenancy and the different ways of co-owning property can be complicated. If you're dealing with the co-owned property of a loved one who died, and you're not sure how they co-owned it or what the implications are, find a probate attorney to help.
false
22
29
1,162
null
679
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
Summarize this article in 400 words or list. Create bullet lists for what causes skin tags and how skin tags are created. Include a list of people most likely to get skin tags and why.
Skin tags: Why they develop, and how to remove them. Skin tags are harmless growths that can appear anywhere on your skin, but often develop on the neck, eyelids, or underarms. They may be the same color as your skin or darker. Some are pink. Others turn red when irritated. You may see one dangling from a stalk, while another is firmly fixed to the skin. With all this variation, there is one thing that acrochordons (the medical name for skin tags) seem to have in common: many people want to remove them. You only need to remove a skin tag if it becomes irritated, feels uncomfortable, or affects your eyesight. If one or more of your skin tags fits this description, contact a board-certified dermatologist because no one understands your skin better. [Image: man placing his finger just below an irritated skin tag on his neck.] The following explains how dermatologists remove skin tags. It also answers other questions that patients frequently ask their dermatologist. Why am I getting skin tags? These growths can appear anywhere on the skin, but they usually develop where skin has been rubbing against skin, jewelry, or clothing for some time. That's why they usually occur in one or more of these areas: beneath the breasts, on the eyelids, in the groin, in neck creases (or where clothing or jewelry rubs against the neck), and in the underarms. Skin tags are also commonly found on the sides, abdomen, or back. Because they develop where skin rubs against skin, people who are overweight, pregnant, or have loose skin are more likely to get skin tags. You also have a higher risk of developing skin tags if you have diabetes, metabolic syndrome (high blood pressure, unhealthy blood sugar levels, extra fat around your waist, or unhealthy cholesterol levels), or a blood relative has skin tags. It's important to keep in mind that these growths are harmless. Should I remove a skin tag? Because they're harmless, a skin tag only needs to be removed if it becomes irritated or bleeds, develops on your eyelid and affects your eyesight, or feels painful, especially when the pain comes on suddenly. A skin tag can become irritated if it frequently rubs against jewelry, clothing, or a seat belt. Shaving can also irritate it, especially if you nick the skin tag. A dermatologist can remove these skin tags. Suddenly developing many skin tags, while rare, can be a sign that something is going on inside your body. If this happens, see a board-certified dermatologist, who can make sure you have skin tags and may recommend that you see your primary care doctor. [Image: several skin tags on a person's skin.] If you dislike the way a skin tag looks, your dermatologist can also remove it. However, you'll likely pay the cost. Insurance providers consider removing a skin growth for looks alone a cosmetic treatment. Insurance rarely covers the cost of cosmetic treatments. How does a dermatologist remove skin tags? Your dermatologist can quickly and safely remove one or more skin tags during an office visit, and usually without the need for a follow-up appointment. The treatment that your dermatologist uses will depend on the size of the skin tag, where it appears on your body, and other considerations. Your dermatologist may use: Cryosurgery: During this treatment, your dermatologist applies an extremely cold substance like liquid nitrogen to freeze and destroy the skin tag. Sometimes, freezing causes a blister or scab. When the blister or scab falls off, so will the skin tag. When using cryosurgery, your dermatologist may freeze only the bottom of the skin tag and then snip it off with a sterile surgical blade or scissors. Electrodesiccation: Your dermatologist uses a tiny needle to zap the skin tag, which destroys it. You'll develop a scab on the treated skin that will heal in one to three weeks. Snip: Your dermatologist will numb the area, use sterile surgical scissors or a blade to remove the skin tag, and then apply a solution to stop the bleeding. After treatment, your dermatologist may give you aftercare instructions to follow. This may include removing the bandage, washing the area carefully, and covering it with a new bandage. Follow your aftercare instructions carefully to prevent problems like an infection. Products that you can use at home to remove skin tags are not recommended. The U.S. Food and Drug Administration (FDA) has not approved any of these products. Because of the harm these products can cause, the FDA warns people NOT to use them. To find out more, go to "5 reasons to see a dermatologist for mole, skin tag removal." Does wart remover work on skin tags? Given that some skin tags look like warts, it's easy to think wart remover would work well. It doesn't. Warts are hard and need strong medication. Skin tags are soft, so using a wart remover on them can damage your skin. You may develop scarring or irritated skin where you apply wart remover. Seeing a dermatologist can give you peace of mind. Skin tags come in many shapes and sizes, so you may mistake a wart or even a skin cancer for a skin tag. Board-certified dermatologists know the difference between something small and something major. By seeing a dermatologist, you'll find out what's going on and that can bring peace of mind.
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> Summarize this article in 400 words or list. Create bullet lists for what causes skin tags and how skin tags are created. Include a list of people most likely to get skin tags and why. <TEXT> Skin tags: Why they develop, and how to remove them Skin tags are harmless growths that can appear anywhere on your skin, but often develop on the neck, eyelids, or underarms. They may be the same color as your skin or darker. Some are pink. Others turn red when irritated. You may see one dangling from a stalk, while another is firmly fixed to the skin. With all this variation, there is one thing that acrochordons (medical name for skin tags) seem to have in common. Many people want to remove them. You only need to remove a skin tag if it becomes irritated, feels uncomfortable, or affects your eyesight. If one or more of your skin tags fits this description, contact a board-certified dermatologist because no one understands your skin better. Man placing his finger just below an irritated skin tag on his neck. The following explains how dermatologists remove skin tags. It also answers other questions that patients frequently ask their dermatologist. Why am I getting skin tags? These growths can appear anywhere on the skin, but they usually develop where skin has been rubbing against skin, jewelry, or clothing for some time. That’s why they usually occur in one or more of these areas: Breasts (beneath) Eyelids Groin Neck creases (or where clothing or jewelry rubs against the neck) Underarms Skin tags are also commonly found on the sides, abdomen, or back. Because they develop where skin rubs against skin, people who are overweight, pregnant, or have loose skin are more likely to get skin tags. You also have a higher risk of developing skin tags if you have diabetes, metabolic syndrome (high blood pressure, unhealthy blood sugar levels, extra fat around your waist, or unhealthy cholesterol levels), or a blood relative has skin tags. It’s important to keep in mind that these growths are harmless. Should I remove a skin tag? Because they’re harmless, a skin tag only needs to be removed if it: Becomes irritated or bleeds Develops on your eyelid and affects your eyesight Feels painful, especially when the pain comes on suddenly A skin tag can become irritated if it frequently rubs against jewelry, clothing, or a seat belt. Shaving can also irritate it, especially if you nick the skin tag. A dermatologist can remove these skin tags. Suddenly developing many skin tags while rare can be a sign that something is going on inside your body. If this happens, see a board-certified dermatologist, who can make sure you have skin tags and may recommend that you see your primary care doctor. Several skin tags on a person’s skin If you dislike the way a skin tag looks, your dermatologist can also remove it. However, you’ll likely pay the cost. Insurance providers consider removing a skin growth for looks alone a cosmetic treatment. Insurance rarely covers the cost of cosmetic treatments. How does a dermatologist remove skin tags? Your dermatologist can quickly and safely remove one or more skin tags during an office visit, and usually without the need for a follow-up appointment. The treatment that your dermatologist uses will depend on the size of the skin tag, where it appears on your body, and other considerations. 
Your dermatologist may use: Cryosurgery: During this treatment, your dermatologist applies an extremely cold substance like liquid nitrogen to freeze and destroy the skin tag. Sometimes, freezing causes a blister or scab. When the blister or scab falls off, so will the skin tag. When using cryosurgery, your dermatologist may freeze only the bottom of the skin tag and then snip it off with a sterile surgical blade or scissors. Electrodesiccation: Your dermatologistuses a tiny needle to zap the skin tag, which destroys it.You’ll develop a scab on the treated skin that will heal in one to three weeks. Snip: Your dermatologist will numb the area, use sterile surgical scissors or a blade to remove the skin tag, and then apply a solution to stop the bleeding. After treatment, your dermatologist may give you aftercare instructions to follow. This may include removing the bandage, washing the area carefully, and covering it with a new bandage. Follow your aftercare instructions carefully to prevent problems like an infection. Products that you can use at-home to remove skin tags are not recommended The U.S. Food and Drug Administration (FDA) has not approved any of these products. Because of the harm these products can cause, the FDA warns people NOT to use them. To find out more, go to 5 reasons to see a dermatologist for mole, skin tag removal. Does wart remover work on skin tags? Given that some skin tags look like warts, it’s easy to think wart remover would work well. It doesn’t. Warts are hard and need strong medication. Skin tags are soft, so using a wart remover on them can damage your skin. You may develop scarring or irritated skin where you apply wart remover. Seeing a dermatologist can give you peace of mind Skin tags come in many shapes and sizes, so you may mistake a wart or even a skin cancer for a skin tag. Board-certified dermatologists know the difference between something small and something major. By seeing a dermatologist, you’ll find out what’s going on and that can bring peace of mind. Related AAD resources 5 reasons to see a dermatologist for mole, skin tag removal Images Image 1: Getty Images Image 2: Used with permission of the Journal of the American Academy of Dermatology. (J Am Acad Dermatol. 2019;81:1037-57.) References Belgam Syed SY, Lipoff JB, et al. “Acrochordon.” In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan. Farshchian M, Kimyai-Asadi A et al. “Cryosnip for skin tag removal.” J Am Acad Dermatol. 2021 May 30:S0190-9622(21)01032-X. doi: 10.1016/j.jaad.2021.05.039. Epub ahead of print. Hirt PA, Castillo DE, et al. “Skin changes in the obese patient.” J Am Acad Dermatol. 2019 Nov;81(5):1037-57. Kutzner HH, Kamino H, et al. “Fibrous and fibrohistiocytic proliferations of the skin and tendons.” In: Bolognia JL, et al. Dermatology. (fourth edition). Mosby Elsevier, China, 2018: 2068-9. Schwartz, RA. “Acrochordon.” In:Medscape(Elston DM., Ed.) Last updated 10/26/2022. Last accessed 3/28/2023. Tucker, R. “Advice on how to treat skin tags.” The Pharm Jour. Published March 1, 2011. Last accessed March 23, 2023. U.S. Food and Drug Administration. “Products marketed for removing moles and other skin lesions can cause injuries, scarring.” Last updated 8/10/22. Last visited 3/30/23. https://www.aad.org/public/diseases/a-z/skin-tags
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: Skin tags: Why they develop, and how to remove them Skin tags are harmless growths that can appear anywhere on your skin, but often develop on the neck, eyelids, or underarms. They may be the same color as your skin or darker. Some are pink. Others turn red when irritated. You may see one dangling from a stalk, while another is firmly fixed to the skin. With all this variation, there is one thing that acrochordons (medical name for skin tags) seem to have in common. Many people want to remove them. You only need to remove a skin tag if it becomes irritated, feels uncomfortable, or affects your eyesight. If one or more of your skin tags fits this description, contact a board-certified dermatologist because no one understands your skin better. Man placing his finger just below an irritated skin tag on his neck. The following explains how dermatologists remove skin tags. It also answers other questions that patients frequently ask their dermatologist. Why am I getting skin tags? These growths can appear anywhere on the skin, but they usually develop where skin has been rubbing against skin, jewelry, or clothing for some time. That’s why they usually occur in one or more of these areas: Breasts (beneath) Eyelids Groin Neck creases (or where clothing or jewelry rubs against the neck) Underarms Skin tags are also commonly found on the sides, abdomen, or back. Because they develop where skin rubs against skin, people who are overweight, pregnant, or have loose skin are more likely to get skin tags. You also have a higher risk of developing skin tags if you have diabetes, metabolic syndrome (high blood pressure, unhealthy blood sugar levels, extra fat around your waist, or unhealthy cholesterol levels), or a blood relative has skin tags. It’s important to keep in mind that these growths are harmless. Should I remove a skin tag? Because they’re harmless, a skin tag only needs to be removed if it: Becomes irritated or bleeds Develops on your eyelid and affects your eyesight Feels painful, especially when the pain comes on suddenly A skin tag can become irritated if it frequently rubs against jewelry, clothing, or a seat belt. Shaving can also irritate it, especially if you nick the skin tag. A dermatologist can remove these skin tags. Suddenly developing many skin tags while rare can be a sign that something is going on inside your body. If this happens, see a board-certified dermatologist, who can make sure you have skin tags and may recommend that you see your primary care doctor. Several skin tags on a person’s skin If you dislike the way a skin tag looks, your dermatologist can also remove it. However, you’ll likely pay the cost. Insurance providers consider removing a skin growth for looks alone a cosmetic treatment. Insurance rarely covers the cost of cosmetic treatments. How does a dermatologist remove skin tags? Your dermatologist can quickly and safely remove one or more skin tags during an office visit, and usually without the need for a follow-up appointment. The treatment that your dermatologist uses will depend on the size of the skin tag, where it appears on your body, and other considerations. Your dermatologist may use: Cryosurgery: During this treatment, your dermatologist applies an extremely cold substance like liquid nitrogen to freeze and destroy the skin tag. Sometimes, freezing causes a blister or scab. 
When the blister or scab falls off, so will the skin tag. When using cryosurgery, your dermatologist may freeze only the bottom of the skin tag and then snip it off with a sterile surgical blade or scissors. Electrodesiccation: Your dermatologist uses a tiny needle to zap the skin tag, which destroys it. You’ll develop a scab on the treated skin that will heal in one to three weeks. Snip: Your dermatologist will numb the area, use sterile surgical scissors or a blade to remove the skin tag, and then apply a solution to stop the bleeding. After treatment, your dermatologist may give you aftercare instructions to follow. This may include removing the bandage, washing the area carefully, and covering it with a new bandage. Follow your aftercare instructions carefully to prevent problems like an infection. Products that you can use at home to remove skin tags are not recommended. The U.S. Food and Drug Administration (FDA) has not approved any of these products. Because of the harm these products can cause, the FDA warns people NOT to use them. To find out more, go to 5 reasons to see a dermatologist for mole, skin tag removal. Does wart remover work on skin tags? Given that some skin tags look like warts, it’s easy to think wart remover would work well. It doesn’t. Warts are hard and need strong medication. Skin tags are soft, so using a wart remover on them can damage your skin. You may develop scarring or irritated skin where you apply wart remover. Seeing a dermatologist can give you peace of mind. Skin tags come in many shapes and sizes, so you may mistake a wart or even a skin cancer for a skin tag. Board-certified dermatologists know the difference between something small and something major. By seeing a dermatologist, you’ll find out what’s going on, and that can bring peace of mind. Related AAD resources: 5 reasons to see a dermatologist for mole, skin tag removal. Images: Image 1: Getty Images. Image 2: Used with permission of the Journal of the American Academy of Dermatology. (J Am Acad Dermatol. 2019;81:1037-57.) References: Belgam Syed SY, Lipoff JB, et al. “Acrochordon.” In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan. Farshchian M, Kimyai-Asadi A, et al. “Cryosnip for skin tag removal.” J Am Acad Dermatol. 2021 May 30:S0190-9622(21)01032-X. doi: 10.1016/j.jaad.2021.05.039. Epub ahead of print. Hirt PA, Castillo DE, et al. “Skin changes in the obese patient.” J Am Acad Dermatol. 2019 Nov;81(5):1037-57. Kutzner HH, Kamino H, et al. “Fibrous and fibrohistiocytic proliferations of the skin and tendons.” In: Bolognia JL, et al. Dermatology (fourth edition). Mosby Elsevier, China, 2018: 2068-9. Schwartz RA. “Acrochordon.” In: Medscape (Elston DM, Ed.) Last updated 10/26/2022. Last accessed 3/28/2023. Tucker R. “Advice on how to treat skin tags.” The Pharm Jour. Published March 1, 2011. Last accessed March 23, 2023. U.S. Food and Drug Administration. “Products marketed for removing moles and other skin lesions can cause injuries, scarring.” Last updated 8/10/22. Last visited 3/30/23. USER: Summarize this article in 400 words or list. Create bullet lists for what causes skin tags and how skin tags are created. Include a list of people most likely to get skin tags and why. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
35
1,064
null
360
Please answer questions using the text found in the prompt only. Do not use any external information whatsoever!
If I had just met a person of the advisory committee for the first time yesterday, would this count as a covered relationship?
Under section 502, the following scenario would also raise a potential appearance issue: where a person (or entity) with whom the advisory committee member has a “covered relationship” is or represents a “party to the matter” coming before the advisory committee. Both “covered relationship” and “party to the matter” are described below. “Covered Relationship”: Section 502(b)(1) provides that a member has a “covered relationship” with the following people and entities: (i) A person with whom the member has or is seeking a business, contractual, or other financial relationship other than a routine consumer transaction; (ii) A person who is a member of her household or a relative with whom she has a close personal relationship; (iii) A person or entity for which the member has, within the last year, served as an employee, officer, director, consultant, agent, attorney, trustee, contractor, or general partner; (iv) A person or entity for which the member’s spouse, parent, or dependent child currently serves or is seeking to serve as an employee, officer, director, consultant, contractor, agent, attorney, trustee, or general partner; and (v) An organization, other than a political party, in which the member is an “active participant.” Mere membership in an organization, payment of dues, or the donation or solicitation of financial support does not, by itself, constitute active participation.
System Instructions: Please answer questions using the text found in the prompt only. Do not use any external information whatsoever! Prompt: If I had just met a person of the advisory committee for the first time yesterday, would this count as a covered relationship? Context Block: Under section 502, the following scenario would also raise a potential appearance issue: where a person (or entity) with whom the advisory committee member has a “covered relationship” is or represents a “party to the matter” coming before the advisory committee. Both “covered relationship” and “party to the matter” are described below. “Covered Relationship”: Section 502(b)(1) provides that a member has a “covered relationship” with the following people and entities: (i) A person with whom the member has or is seeking a business, contractual, or other financial relationship other than a routine consumer transaction; 27 (ii) A person who is a member of her household or a relative with whom she has a close personal relationship; (iii) A person or entity for which the member has, within the last year,28 served as an employee, officer, director, consultant, agent, attorney, trustee, contractor, or general partner; (iv) A person or entity for which the member’s spouse, parent, or dependent child currently serves or is seeking to serve as an employee, officer, director, consultant, contractor, agent, attorney, trustee, or general partner; and (v) An organization, other than a political party, in which the member is an “active participant.” Mere membership in an organization, payment of dues, or the donation or solicitation of financial support does not, by itself, constitute active participation.
Please answer questions using the text found in the prompt only. Do not use any external information whatsoever! EVIDENCE: Under section 502, the following scenario would also raise a potential appearance issue: where a person (or entity) with whom the advisory committee member has a “covered relationship” is or represents a “party to the matter” coming before the advisory committee. Both “covered relationship” and “party to the matter” are described below. “Covered Relationship”: Section 502(b)(1) provides that a member has a “covered relationship” with the following people and entities: (i) A person with whom the member has or is seeking a business, contractual, or other financial relationship other than a routine consumer transaction; 27 (ii) A person who is a member of her household or a relative with whom she has a close personal relationship; (iii) A person or entity for which the member has, within the last year,28 served as an employee, officer, director, consultant, agent, attorney, trustee, contractor, or general partner; (iv) A person or entity for which the member’s spouse, parent, or dependent child currently serves or is seeking to serve as an employee, officer, director, consultant, contractor, agent, attorney, trustee, or general partner; and (v) An organization, other than a political party, in which the member is an “active participant.” Mere membership in an organization, payment of dues, or the donation or solicitation of financial support does not, by itself, constitute active participation. USER: If I had just met a person of the advisory committee for the first time yesterday, would this count as a covered relationship? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
18
23
218
null
260
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
Examine the dual influence of technological innovation and political diplomacy in shaping the design and use of rooms S-116 and S-117 in the Senate Foreign Relations Committee suite. In your response, connect the symbolic contributions of both Robert Fulton and Benjamin Franklin to at least two major U.S. foreign policies debated in the room. Additionally, analyze how the frescoes depicting their legacies impacted both domestic and international legislative decisions; do not mention any historical event that occurred before 1840; and ensure that every sentence references a different artistic or architectural feature of the suite.
The U.S. Senate Foreign Relations Committee Suite. The U.S. Senate Foreign Relations Committee. The Senate Committee on Foreign Relations was established in 1816 as one of the original eleven permanent standing committees of the Senate. Throughout its history, the Foreign Relations Committee has been instrumental in developing and influencing U.S. foreign policy. The Senate Foreign Relations Committee Room (S-116). The committee considers, debates, and negotiates important treaties and legislation that support the national interest. It also holds jurisdiction over all diplomatic nominations. Through these powers, the committee has helped shape foreign policy of broad significance in matters of war and peace and international relations. The committee receives honored guests and conducts official business in its historic, two-room suite, located in the northeast corner of the Senate extension, built between 1851 and 1868. History. Like many Senate spaces in the Capitol, S-116 and S-117 have served many different committees and offices. The first known use of S-116 was as the Senate folding room. It was here that publications were stored and where clerks prepared documents, bills, and speeches for distribution. Eventually this service was transferred to the Government Printing Office. The Senate Committee on Patents, which held jurisdiction over patents and patent law, moved into S-116 in the 1870s. It was during the Patent Committee’s nearly two-decade occupancy of the room that artist Constantino Brumidi executed the fresco of American innovator Robert Fulton above the entrance to the committee room. Robert Fulton depicted in the lunette over S-116, then used by the Committee on Patents from 1872-1895. This follows a precedent applied throughout the Capitol’s corridors of relating the figurative murals to the achievements or work of the committee in the nearby room. The fresco above the door to S-116 depicts Fulton with his left hand resting on mechanical diagrams and his right hand gesturing towards a steamboat traveling the Hudson River—references to Fulton’s success at harnessing steam engine technology to create the first viable commercial steamboat service in the early 19th century. Brumidi incorporated emblems representing science, agriculture, navigation, and the arts on the ceiling in the spacious lobby outside S-116, known as the Patent Corridor. The original occupant of S-117 was the Senate Committee on Post-Offices and Post-Roads. To recognize this committee, Brumidi painted a fresco above the room’s entrance depicting Benjamin Franklin, the first postmaster general, surrounded by his inventions. The Retrenchment Committee using S-116. The two rooms, S-116 and S-117, continued to serve separate purposes until 1931.
Occupancy. The following occupancy lists are compiled from a variety of resources, including architectural plans, guidebooks, contemporary records, and the annual directory of the United States Congress—first published in 1869.
S-116: 1869, Senate Folding Room; 1870-1871, Committee on Retrenchment; 1872-1881, Committee on Patents; 1882-1884, Committee on Patents and Committee on Female Suffrage; 1885-1895, Committee on Patents; 1896-1914, Senate Post-Office; 1915, Committee on Agriculture and Forestry; 1916, Committee on Contingent Expenses; 1917, Committee on Industrial Expositions; 1918-1924, Committee on Immigration; 1925, Committee on Territories; 1926-1928, Committee on Territories and Insular Possessions; 1929-1930, Committee on Printing; 1931-1932, Committee on Naval Affairs; 1933-present, Committee on Foreign Relations.
S-117: 1869-1895, Committee on Post-Offices and Post-Roads; 1896-1901, Committee on Foreign Relations; 1902-1909, Committee on Printing; 1910-1914, Committee on Agriculture and Forestry; 1915, Committee on the Library; 1915-1917, Committee on Agriculture and Forestry; 1918-1920, Committee on Census; 1921-1923, Committee on Enrolled Bills; 1924-1932, Committee on Naval Affairs; 1933-present, Committee on Foreign Relations.
Decorative Highlights. The Senate extension and its companion House extension, designed by Thomas U. Walter and built by Montgomery C. Meigs, were meant to inspire visitors to reflect upon the Capitol as a symbol of American democracy and to showcase the nation’s accomplishments, resources, and wealth. To this end, the extensions featured elaborate wall and ceiling murals, as well as the finest workmanship, the most exquisite building materials, and the latest technologies. Offices boasted high vaulted ceilings, ornamental cast-iron door and window frames, interior wood shutters, carved marble mantels, and marble baseboards, in addition to unique decorative details. Colorful floor tiles from Minton, Hollins & Company of England were used throughout to enliven spaces and add pattern. Accompanying this architectural grandeur, ventilation, heating, water closets, and gas lighting provided valuable comfort to the occupants. The central floor medallion in S-116. Natural light floods S-117 through the four generously proportioned windows. Today, the corner room’s windows offer views north to Union Station and east to the Supreme Court building. Ornamental bands of stylized leaves decorate the arched cast-iron window and door frames. The room retains its original marble mantel, crowned by a gilded Neoclassical Revival mirror whose frame features a prominent cornice with bead-and-reel and egg-and-dart ornaments, fluted pilasters, and acanthus leaves that delicately wrap around the corners of the frame. Aside from the windows, the room is lit by a central crystal chandelier and four inverted crystal cone pendant lights, and eight sconces installed on the window frames. The mantel and mirror in S-117. The pendant lights were added in the 1960s and were designed to integrate with the committee room’s existing 19th-century chandelier and sconces. While not the original light fixtures, the chandelier and sconces are early 20th-century pieces reminiscent of new, vibrant chandeliers that were appearing in Senate spaces at the time. The newly constructed 1909 Senate Russell Office Building’s committee rooms sparkled with silver and crystal fixtures, and in 1910, seven striking crystal chandeliers acquired from the White House were installed in the Capitol.
These bright, vivid lights offered a welcome contrast to the darker, dull metal ones that hung in most extension spaces, so silver and crystal chandeliers were purchased for many Capitol offices. The painted decoration on the ceiling in S-117 emphasizes the architectural vault lines and dates from the turn of the 20th century, when a comprehensive decorative campaign for the Capitol was implemented. While some of the colors have changed with repainting over the years, the style of ornamentation holds true to the original design. Room S-116, which was first used as the folding room, was probably very plainly decorated. Initially, there were not even fireplaces. The two existing marble mantels were presumably installed in 1872, when the prominent Committee on Pensions was assigned the room. The Rococo Revival mirrors that rest on the mantels feature a central cartouche, scrolls, stylized leaves, and flowers. The decorative painted banding on the ceiling in S-116 probably dates to sometime after 1900 and is documented in images as early as 1946. The impressive, obround-shaped committee table fills the room and accommodates every current member of the committee. Each senator has an assigned seat, marked by an engraved brass nameplate. A bronze bust of Cordell Hull by George Conlon oversees the activities of the Foreign Relations Committee from the central window well. Hull’s commemoration in the Foreign Relations Committee Room is fitting given his work as secretary of state, promoting peace and international alliances, and establishing the United Nations.
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> Examine the dual influence of technological innovation and political diplomacy in shaping the design and use of rooms s-116 and s-117 in the senate foreign relations Committee suite. In your response, connect the symbolic contribution of both robert and benjamin to at least two major us foreign policies debated in the room. Additionally, analyze how frescoes depicting their legaixes impacted both domestic and international legislative decisions; do not mention any historical event that occurred before 1840; and ensure that every sentence references a different artistic or architectural feature if the suite <TEXT> The U.S. SenaTe Foreign relaTionS CommiTTee SUiTe The U.S. SenaTe Foreign relaTionS Committee The Senate Committee on Foreign Relations was established in 1816 as one of the original eleven permanent standing committees of the Senate. Throughout its history, the Foreign Relations Committee has been instrumental in developing The Senate Foreign Relations Committee Room (S-116) and influencing U.S. foreign policy. The committee tee considers, debates, and negotiates important treaties and legislation that support the national interest. It also holds jurisdiction over all democratic nominations. Through these powers, the committee has helped shape foreign policy of broad significance in matters of war and peace and international relations. The committee receives honored guests and con- ducts official business in its historic, two-room suite, located in the northeast corner of the Senate extension, built between 1851 and 1868. History Like many Senate spaces in the Capitol, S-116 and S-117 have served many different committees and offices. The first known use of S-116 was as the Senate folding room. It was here that publications were stored and where clerks prepared documents, bills, and speeches for distribution. Eventually this service was transferred to the Government printing ing Office. The Senate Committee on Patents, which held jurisdiction over patents and patent law, moved into S-116 in the 1870s. It was during the Patent Committee’s nearly two-decade occu- pancy of the room that artist Constantino Brumidi executed the fresco of American innovator Robert Fulton above the entrance to the committee room. Robert Fulton depicted in the lunette over S-116, then used by the Committee on Patents from 1872-1895 This follows a precedent applied throughout the Capitol’s corridors of relating the figurative murals to the achievements or work of the committee in the nearby room. The fresco above the door to S-116 depicts Fulton with his left hand resting on mechanical diagrams and his right hand gesturing ing towards a steamboat traveling the Hudson River—references to Fulton’s success at harnessing steam engine technology to create the first viable commercial steamboat service in the early 19th century. Brumidi incorporated emblems representing ing science, agriculture, navigation, and the arts on the ceiling in the spacious lobby outside S-116, known as the Patent Corridor. The original occupant of S-117 was the Senate Committee on Post-Offices and Post-Roads. To recognize this committee, Brumidi painted a fresco above the room’s entrance depicting Benjamin Franklin, the first postmaster general, surrounded by his inventions. The Retrenchment Committee using S-116 The two rooms, S-116 and S-117, continued to serve separate purposes until 1931. 
Occupants of the rooms reflected the concerns of a growing nation, including the committees on Retrenchmentment, Patents, Agriculture, Immigration, Territories, Female Suffrage, and Naval Affairs. In 1931, S-116 and S-117 became associated as a suite to accommodate the needs of the Naval Affairs Committee. Shortly after, in 1933, the Committee on Foreign Relations moved into the two-room suite, an assignment it retains to this day. While the Foreign Relations Com- mittee maintains several offices throughout the Capitol complex, the two rooms in the Capitol have become symbolic of the committee and its notable diplomatic work. Occupancy The following occupancy lists are compiled from a variety of resources, including architectural plans, guidebooks, contemporary records, and the annual directory of the United States Congress— first published in 1869. S–116 1869 Senate Folding Room 1870-1871 Committee on Retrenchment 1872-1881 Committee on Patents 1882-1884 Committee on Patents and Committee on Female Suffrage 1885-1895 Committee on Patents 1896-1914 Senate Post-Office 1915 Committee on Agriculture and Forestry 1916 Committee on Contingent Expenses 1917 Committee on Industrial Expositions 1918-1924 Committee on Immigration 1925 Committee on Territories 1926-1928 Committee on Territories and Insular Possessions 1929-1930 Committee on Printing 1931-1932 Committee on Naval Affairs 1933-present Committee on Foreign Relations S–117 1869-1895 Committee on Post-Offices and Post-Roads 1896-1901 Committee on Foreign Relations 1902-1909 Committee on Printing 1910-1914 Committee on Agriculture and Forestry 1915 Committee on the Library 1915-1917 Committee on Agriculture and Forestry 1918-1920 Committee on Census 1921-1923 Committee on Enrolled Bills 1924-1932 Committee on Naval Affairs 1933-present Committee on Foreign Relations Decorative Highlights The Senate extension and its companion House extension, designed by Thomas U. Walter and built by Montgomery C. Meigs, were meant to inspire visitors to reflect upon the Capitol as a symbol of American democracy and to showcase case the nation’s accomplishments, resources, and wealth. To this end, the extensions featured elaborate wall and ceiling murals, as well as the finest workmanship, the most exquisite building materials, and the latest technologies. Offices boasted high vaulted ceilings, ornamental cast-iron door and window frames, interior wood shutters, carved marble mantels, and marble baseboards, in addition to unique decorative details. Colorful floor tiles from Minton, Hollins & Company of England were used throughout to enliven spaces and add pattern. Accompanying this architectural grandeur, ventilation, heating, water closets, and gas lighting provided valuable comfort to the occupants. The central floor medallion in S-116 Natural light floods S-117 through the four generously ously proportioned windows. Today, the corner room’s windows offer views north to Union Station and east to the Supreme Court building. Ornamental bands of stylized leaves decorate the arched cast-iron window and door frames. The room retains its original marble mantel, crowned by a gilded Neoclassical Revival mirror whose frame features a prominent cornice with bead-and-reel and egg-and-dart ornaments, fluted pilasters, and acan—thus leaves that delicately wrap around the corners of the frame. 
Aside from the windows, the room is lit by a central crystal chandelier and four inverted crystal tal cone pendants The mantel and mirror in S-117 lights, and eight sconces installed on the window frames. The pendant lights were added in the 1960s and were designed to integrate with the committee room’s existing 19th-century chandelier and sconces. While not the original light fixtures, the chandelier and sconces are early 20th-century pieces reminiscent of new, vibrant chandeliers that were appearing in Senate spaces at the time. The newly constructed 1909 Senate Russell Office Buildinging’s committee rooms sparkled with silver and crystal fixtures, and in 1910, seven striking crystal chandeliers acquired from the White House were installed in the Capitol. These bright, vivid lights offered a welcome contrast to the darker, dull metal ones that hung in most extension spaces, so silver and crystal chandeliers were purchased for many Capitol offices. The painted decoration on the ceiling in S-117 emphasizes the architectural vault lines and dates from the turn of the 20th century, when a comprehensive decorative campaign for the Capitol was implemented. While some of the colors have changed with repainting over the years, the style of ornamentation holds true to the original design. Room S-116, which was first used as the folding ing room, was probably very plainly decorated. Initially, there were not even fireplaces. The two existing marble mantels were presumably installed in 1872, when the prominent Committee on Pensions was assigned the room. The Rococo Revival mirrors that rest on the mantels feature a central cartouche, scrolls, stylized leaves, and flowers. The decorative painted banding on the ceiling in S-116 probably dates to sometime after 1900 and is documented in images as early as 1946. The impressive, obround-shaped committee table fills the room and accommodates every current member of the committee. Each senator has an assigned seat, marked by an engraved brass name- plate. A bronze bust of Cordell Hull by George Conlon oversees the activities of the Foreign Rela- tions Committee from the central window well. Hull’s commemoration in the Foreign Relations Committee Room is fitting given his work as secretary of state, promoting peace and international alliances, and establishing the United Nations. https://www.senate.gov/art-artifacts/publications/pdf/room-foreign-relations.pdf
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: The U.S. SenaTe Foreign relaTionS CommiTTee SUiTe The U.S. SenaTe Foreign relaTionS Committee The Senate Committee on Foreign Relations was established in 1816 as one of the original eleven permanent standing committees of the Senate. Throughout its history, the Foreign Relations Committee has been instrumental in developing The Senate Foreign Relations Committee Room (S-116) and influencing U.S. foreign policy. The committee tee considers, debates, and negotiates important treaties and legislation that support the national interest. It also holds jurisdiction over all democratic nominations. Through these powers, the committee has helped shape foreign policy of broad significance in matters of war and peace and international relations. The committee receives honored guests and con- ducts official business in its historic, two-room suite, located in the northeast corner of the Senate extension, built between 1851 and 1868. History Like many Senate spaces in the Capitol, S-116 and S-117 have served many different committees and offices. The first known use of S-116 was as the Senate folding room. It was here that publications were stored and where clerks prepared documents, bills, and speeches for distribution. Eventually this service was transferred to the Government printing ing Office. The Senate Committee on Patents, which held jurisdiction over patents and patent law, moved into S-116 in the 1870s. It was during the Patent Committee’s nearly two-decade occu- pancy of the room that artist Constantino Brumidi executed the fresco of American innovator Robert Fulton above the entrance to the committee room. Robert Fulton depicted in the lunette over S-116, then used by the Committee on Patents from 1872-1895 This follows a precedent applied throughout the Capitol’s corridors of relating the figurative murals to the achievements or work of the committee in the nearby room. The fresco above the door to S-116 depicts Fulton with his left hand resting on mechanical diagrams and his right hand gesturing ing towards a steamboat traveling the Hudson River—references to Fulton’s success at harnessing steam engine technology to create the first viable commercial steamboat service in the early 19th century. Brumidi incorporated emblems representing ing science, agriculture, navigation, and the arts on the ceiling in the spacious lobby outside S-116, known as the Patent Corridor. The original occupant of S-117 was the Senate Committee on Post-Offices and Post-Roads. To recognize this committee, Brumidi painted a fresco above the room’s entrance depicting Benjamin Franklin, the first postmaster general, surrounded by his inventions. The Retrenchment Committee using S-116 The two rooms, S-116 and S-117, continued to serve separate purposes until 1931. Occupants of the rooms reflected the concerns of a growing nation, including the committees on Retrenchmentment, Patents, Agriculture, Immigration, Territories, Female Suffrage, and Naval Affairs. In 1931, S-116 and S-117 became associated as a suite to accommodate the needs of the Naval Affairs Committee. Shortly after, in 1933, the Committee on Foreign Relations moved into the two-room suite, an assignment it retains to this day. 
While the Foreign Relations Com- mittee maintains several offices throughout the Capitol complex, the two rooms in the Capitol have become symbolic of the committee and its notable diplomatic work. Occupancy The following occupancy lists are compiled from a variety of resources, including architectural plans, guidebooks, contemporary records, and the annual directory of the United States Congress— first published in 1869. S–116 1869 Senate Folding Room 1870-1871 Committee on Retrenchment 1872-1881 Committee on Patents 1882-1884 Committee on Patents and Committee on Female Suffrage 1885-1895 Committee on Patents 1896-1914 Senate Post-Office 1915 Committee on Agriculture and Forestry 1916 Committee on Contingent Expenses 1917 Committee on Industrial Expositions 1918-1924 Committee on Immigration 1925 Committee on Territories 1926-1928 Committee on Territories and Insular Possessions 1929-1930 Committee on Printing 1931-1932 Committee on Naval Affairs 1933-present Committee on Foreign Relations S–117 1869-1895 Committee on Post-Offices and Post-Roads 1896-1901 Committee on Foreign Relations 1902-1909 Committee on Printing 1910-1914 Committee on Agriculture and Forestry 1915 Committee on the Library 1915-1917 Committee on Agriculture and Forestry 1918-1920 Committee on Census 1921-1923 Committee on Enrolled Bills 1924-1932 Committee on Naval Affairs 1933-present Committee on Foreign Relations Decorative Highlights The Senate extension and its companion House extension, designed by Thomas U. Walter and built by Montgomery C. Meigs, were meant to inspire visitors to reflect upon the Capitol as a symbol of American democracy and to showcase case the nation’s accomplishments, resources, and wealth. To this end, the extensions featured elaborate wall and ceiling murals, as well as the finest workmanship, the most exquisite building materials, and the latest technologies. Offices boasted high vaulted ceilings, ornamental cast-iron door and window frames, interior wood shutters, carved marble mantels, and marble baseboards, in addition to unique decorative details. Colorful floor tiles from Minton, Hollins & Company of England were used throughout to enliven spaces and add pattern. Accompanying this architectural grandeur, ventilation, heating, water closets, and gas lighting provided valuable comfort to the occupants. The central floor medallion in S-116 Natural light floods S-117 through the four generously ously proportioned windows. Today, the corner room’s windows offer views north to Union Station and east to the Supreme Court building. Ornamental bands of stylized leaves decorate the arched cast-iron window and door frames. The room retains its original marble mantel, crowned by a gilded Neoclassical Revival mirror whose frame features a prominent cornice with bead-and-reel and egg-and-dart ornaments, fluted pilasters, and acan—thus leaves that delicately wrap around the corners of the frame. Aside from the windows, the room is lit by a central crystal chandelier and four inverted crystal tal cone pendants The mantel and mirror in S-117 lights, and eight sconces installed on the window frames. The pendant lights were added in the 1960s and were designed to integrate with the committee room’s existing 19th-century chandelier and sconces. While not the original light fixtures, the chandelier and sconces are early 20th-century pieces reminiscent of new, vibrant chandeliers that were appearing in Senate spaces at the time. 
The newly constructed 1909 Senate Russell Office Buildinging’s committee rooms sparkled with silver and crystal fixtures, and in 1910, seven striking crystal chandeliers acquired from the White House were installed in the Capitol. These bright, vivid lights offered a welcome contrast to the darker, dull metal ones that hung in most extension spaces, so silver and crystal chandeliers were purchased for many Capitol offices. The painted decoration on the ceiling in S-117 emphasizes the architectural vault lines and dates from the turn of the 20th century, when a comprehensive decorative campaign for the Capitol was implemented. While some of the colors have changed with repainting over the years, the style of ornamentation holds true to the original design. Room S-116, which was first used as the folding ing room, was probably very plainly decorated. Initially, there were not even fireplaces. The two existing marble mantels were presumably installed in 1872, when the prominent Committee on Pensions was assigned the room. The Rococo Revival mirrors that rest on the mantels feature a central cartouche, scrolls, stylized leaves, and flowers. The decorative painted banding on the ceiling in S-116 probably dates to sometime after 1900 and is documented in images as early as 1946. The impressive, obround-shaped committee table fills the room and accommodates every current member of the committee. Each senator has an assigned seat, marked by an engraved brass name- plate. A bronze bust of Cordell Hull by George Conlon oversees the activities of the Foreign Rela- tions Committee from the central window well. Hull’s commemoration in the Foreign Relations Committee Room is fitting given his work as secretary of state, promoting peace and international alliances, and establishing the United Nations. USER: Examine the dual influence of technological innovation and political diplomacy in shaping the design and use of rooms s-116 and s-117 in the senate foreign relations Committee suite. In your response, connect the symbolic contribution of both robert and benjamin to at least two major us foreign policies debated in the room. Additionally, analyze how frescoes depicting their legaixes impacted both domestic and international legislative decisions; do not mention any historical event that occurred before 1840; and ensure that every sentence references a different artistic or architectural feature if the suite Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
91
1,256
null
462
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
Assess how quantum computing will impact existing cryptographic protocols, and recommend ways the effectiveness of blockchain systems can be guaranteed in the post-quantum world. Analyze the prospects of the new quantum-resistant algorithms; review the scalability challenges of QKD; and explain the policy implications for national security.
The Challenges of Quantum Computing in Cryptography While quantum computing offers many benefits, it also presents several challenges. One of the most significant challenges is the threat of quantum attacks on current encryption algorithms. As mentioned earlier, Shor's algorithm can break RSA encryption, which is widely used to secure data. Any data encrypted using RSA encryption is vulnerable to quantum attacks [9]. 1. Error Correction: The effects of noise and decoherence on quantum computers make them extremely prone to mistakes. Implementing trustworthy quantum cryptography systems to overcome these mistakes can be extremely difficult. 2. Scalability: Because quantum computing is still in its infancy, the number of qubits that present quantum computing systems can support is constrained. Due to this, scaling quantum cryptography systems to handle bigger data volumes and applications is challenging. Another challenge is developing new encryption algorithms resistant to quantum attacks. This is because current encryption algorithms that are secure against classical attacks may not be secure against quantum attacks. Therefore, researchers are actively developing new quantum-resistant encryption algorithms that can be used in the post-quantum era. D. Policy Implications of Quantum Computing in Cryptography As quantum computing advances, policymakers must carefully consider the national security and critical infrastructure implications. Encryption algorithms are essential for securing military communications, financial transactions, and government data, making it crucial to assess the impact of quantum computing on current encryption standards and develop strategies to address any vulnerabilities. The National Institute of Standards and Technology (NIST) has initiated a standardization process for post- quantum cryptography to address this issue. The goal is to create a portfolio of quantum-resistant algorithms that can be widely implemented in the coming years. This process involves a public competition in which researchers submit their proposed encryption algorithms and undergo a rigorous evaluation. NIST will then select the most promising algorithms for standardization. In addition to the need for quantum-resistant encryption, policymakers must also consider the potential for quantum computing to be used for offensive purposes. A quantum computer could break into secure systems and access sensitive data, which could be detrimental to national security. Therefore, governments must establish policies and regulations to prevent the misuse of quantum computing technology and safeguard against potential threats. E. Social Implications of Quantum Computing in Cryptography The impact of quantum computing on cryptography goes beyond policy and security concerns. It also has social implications that must be considered. For example, if quantum computing can break current encryption algorithms, it could significantly impact individual privacy. Personal information such as medical records, financial data, and online communications could be compromised. Furthermore, developing new quantum-resistant encryption algorithms will require significant investment and research, which could limit access to these technologies. This could widen the digital divide and create disparities in access to secure communication channels, particularly for marginalized communities. In conclusion, quantum computing has the potential to revolutionize cryptography, but it also presents significant challenges. 
While new cryptographic techniques such as QKD and quantum signature schemes can enhance security, quantum computing can also be used for offensive purposes. Therefore, policymakers and researchers must collaborate to address these challenges and ensure critical infrastructure security and individual privacy in the post-quantum era. III. CRYPTOGRAPHY AND QUANTUM COMPUTING A. Quantum Key Distribution QKD is a method of securely sharing keys between two parties based on the principles of quantum mechanics. This approach takes advantage of the fact that any attempt to intercept the keys will introduce detectable errors. QKD is considered an unconditionally secure key distribution method, making it ideal for sensitive applications. Although QKD is still in the experimental stage, it has shown promising results and is being studied extensively by researchers worldwide. One of the challenges in implementing QKD is the issue of scalability. Current QKD systems are limited regarding the distance they can distribute keys and the number of users they can support [11]. Researchers are exploring new technologies such as quantum repeaters, quantum memories, and quantum routers to overcome this challenge. These technologies will enable the distribution of keys over longer distances and the support of more users, making QKD a viable option for a wide range of applications. B. Quantum-Resistant Cryptography Quantum-resistant cryptography refers to cryptographic techniques designed to be secure against attacks by quantum computers. Quantum computers pose a threat to current cryptographic algorithms like RSA and Elliptic Curve 3 Cryptography (ECC). Hence, new cryptographic techniques that can resist quantum attacks are necessary. Lattice-based cryptography is one of the most promising candidates for post-quantum cryptography and is under extensive research [11]. Code-based cryptography is another well-established approach that has been around for a while. These approaches are believed to provide high security against quantum attacks. However, one of the challenges in developing post- quantum cryptographic algorithms is ensuring that they are efficient and practical for real-world applications. Many of the current post-quantum cryptographic algorithms are computationally intensive, which could make them impractical for use in resource-constrained environments like mobile devices and the Internet of Things (IoT) [13]. To address this challenge, researchers are exploring new approaches to post-quantum cryptography that are efficient and practical while maintaining the security of sensitive information. C. Cryptographic Protocols for Quantum Computing Cryptographic protocols use cryptographic techniques to secure quantum computing systems. These protocols are designed to protect quantum computers from attacks, prevent the tampering of quantum information, and ensure the integrity of quantum cryptographic keys [2]. Examples of cryptographic protocols for quantum computing include quantum secret sharing [12], quantum oblivious transfer, and quantum homomorphic encryption. These protocols are essential for the secure operation of quantum computing systems, and they are being studied extensively by researchers worldwide. Another challenge in developing cryptographic protocols for quantum computing is ensuring they resist attacks by quantum computers. Many current cryptographic protocols are vulnerable to attacks by quantum computers, which could compromise the security of quantum information [2]. 
To address this issue, researchers are developing new cryptographic protocols resistant to attacks by quantum computers, ensuring that quantum computing systems remain secure. These new cryptographic protocols are being studied extensively by researchers worldwide and can potentially revolutionize how we secure information in the quantum computing era. D. Quantum Cryptography Standards Quantum cryptography standards refer to guidelines defining the requirements for implementing quantum cryptography. These standards ensure that quantum cryptography is implemented securely, reliably, and efficiently [6]. There are several standards for quantum cryptography, including the European Telecommunications Standards Institute (ETSI) and the National Institute of Standards and Technology (NIST) [7]. The previous two organizations work consistently on quantum cryptography standards, as they are developing guidelines and recommendations for implementing quantum cryptography, including post-quantum cryptographic algorithms [8]. Developing quantum cryptography standards will facilitate the adoption of quantum cryptography and ensure that it is implemented securely and efficiently [6]. The development of quantum cryptography standards is essential for the widespread adoption of quantum cryptographic systems. It will ensure that these systems are interoperable and compatible with existing cryptographic protocols. Measures will also help establish trust in quantum cryptographic systems by providing a framework for evaluating and certifying these systems. However, developing standards for quantum cryptography is a complex and challenging task. It requires the collaboration of experts from various fields, including quantum physics, computer science, cryptography and standards development. As quantum cryptographic systems continue to evolve and become more sophisticated, the development of standards will become increasingly important to ensure their security and reliability. E. Quantum Computing and Blockchain Quantum computing has the potential to disrupt the security of blockchain systems. Blockchain is a decentralized, tamper-proof database that records transactions securely and transparently. However, the security of blockchain systems depends on the underlying cryptographic algorithms, which are vulnerable to attacks by quantum computers. To address this issue, researchers are developing post-quantum cryptographic algorithms that can be used to secure blockchain systems.
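The passage above names lattice-based and code-based schemes as post-quantum candidates; another family with the same goal is hash-based signatures, whose security rests only on the strength of a hash function. The sketch below is a minimal, illustrative Lamport one-time signature in Python. It is not taken from the article or from any production library: the function names (keygen, sign, verify), the choice of SHA-256, and the 256-bit digest length are assumptions made for the example, and a real blockchain deployment would use a standardized post-quantum scheme rather than this single-use toy.

```python
import hashlib
import secrets

def _h(data: bytes) -> bytes:
    """SHA-256, the only primitive this toy scheme relies on."""
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    """One-time key pair: two random secrets per digest bit, plus their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(message: bytes, sk):
    """Reveal one secret per bit of the message digest; never reuse the key."""
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(message: bytes, signature, pk) -> bool:
    """Each revealed secret must hash to the committed public value for that bit."""
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(_h(sig) == pk[i][bit] for i, (sig, bit) in enumerate(zip(signature, bits)))

if __name__ == "__main__":
    sk, pk = keygen()
    msg = b"transfer 10 coins to wallet A"
    sig = sign(msg, sk)
    print(verify(msg, sig, pk))          # True
    print(verify(b"tampered", sig, pk))  # False
```

The design point it illustrates matches the passage: security here depends on hash preimage resistance, which Shor's algorithm does not break, but each key pair can safely sign only one message, which is part of why practical hash-based and lattice-based schemes are considerably more involved.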
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. Assess how quantum computing will impact existing cryptographic protocols as well as recommend ways the effectiveness of the blockchain system can be guaranteed in the post-quantum world? Analyze the prospects of the new quantum-resistant algorithms; Review the scalability challenges of QKD and explain the policy implications on natural security The Challenges of Quantum Computing in Cryptography While quantum computing offers many benefits, it also presents several challenges. One of the most significant challenges is the threat of quantum attacks on current encryption algorithms. As mentioned earlier, Shor's algorithm can break RSA encryption, which is widely used to secure data. Any data encrypted using RSA encryption is vulnerable to quantum attacks [9]. 1. Error Correction: The effects of noise and decoherence on quantum computers make them extremely prone to mistakes. Implementing trustworthy quantum cryptography systems to overcome these mistakes can be extremely difficult. 2. Scalability: Because quantum computing is still in its infancy, the number of qubits that present quantum computing systems can support is constrained. Due to this, scaling quantum cryptography systems to handle bigger data volumes and applications is challenging. Another challenge is developing new encryption algorithms resistant to quantum attacks. This is because current encryption algorithms that are secure against classical attacks may not be secure against quantum attacks. Therefore, researchers are actively developing new quantum-resistant encryption algorithms that can be used in the post-quantum era. D. Policy Implications of Quantum Computing in Cryptography As quantum computing advances, policymakers must carefully consider the national security and critical infrastructure implications. Encryption algorithms are essential for securing military communications, financial transactions, and government data, making it crucial to assess the impact of quantum computing on current encryption standards and develop strategies to address any vulnerabilities. The National Institute of Standards and Technology (NIST) has initiated a standardization process for post- quantum cryptography to address this issue. The goal is to create a portfolio of quantum-resistant algorithms that can be widely implemented in the coming years. This process involves a public competition in which researchers submit their proposed encryption algorithms and undergo a rigorous evaluation. NIST will then select the most promising algorithms for standardization. In addition to the need for quantum-resistant encryption, policymakers must also consider the potential for quantum computing to be used for offensive purposes. A quantum computer could break into secure systems and access sensitive data, which could be detrimental to national security. Therefore, governments must establish policies and regulations to prevent the misuse of quantum computing technology and safeguard against potential threats. E. Social Implications of Quantum Computing in Cryptography The impact of quantum computing on cryptography goes beyond policy and security concerns. It also has social implications that must be considered. For example, if quantum computing can break current encryption algorithms, it could significantly impact individual privacy. 
Personal information such as medical records, financial data, and online communications could be compromised. Furthermore, developing new quantum-resistant encryption algorithms will require significant investment and research, which could limit access to these technologies. This could widen the digital divide and create disparities in access to secure communication channels, particularly for marginalized communities. In conclusion, quantum computing has the potential to revolutionize cryptography, but it also presents significant challenges. While new cryptographic techniques such as QKD and quantum signature schemes can enhance security, quantum computing can also be used for offensive purposes. Therefore, policymakers and researchers must collaborate to address these challenges and ensure critical infrastructure security and individual privacy in the post-quantum era. III. CRYPTOGRAPHY AND QUANTUM COMPUTING A. Quantum Key Distribution QKD is a method of securely sharing keys between two parties based on the principles of quantum mechanics. This approach takes advantage of the fact that any attempt to intercept the keys will introduce detectable errors. QKD is considered an unconditionally secure key distribution method, making it ideal for sensitive applications. Although QKD is still in the experimental stage, it has shown promising results and is being studied extensively by researchers worldwide. One of the challenges in implementing QKD is the issue of scalability. Current QKD systems are limited regarding the distance they can distribute keys and the number of users they can support [11]. Researchers are exploring new technologies such as quantum repeaters, quantum memories, and quantum routers to overcome this challenge. These technologies will enable the distribution of keys over longer distances and the support of more users, making QKD a viable option for a wide range of applications. B. Quantum-Resistant Cryptography Quantum-resistant cryptography refers to cryptographic techniques designed to be secure against attacks by quantum computers. Quantum computers pose a threat to current cryptographic algorithms like RSA and Elliptic Curve 3 Cryptography (ECC). Hence, new cryptographic techniques that can resist quantum attacks are necessary. Lattice-based cryptography is one of the most promising candidates for post-quantum cryptography and is under extensive research [11]. Code-based cryptography is another well-established approach that has been around for a while. These approaches are believed to provide high security against quantum attacks. However, one of the challenges in developing post- quantum cryptographic algorithms is ensuring that they are efficient and practical for real-world applications. Many of the current post-quantum cryptographic algorithms are computationally intensive, which could make them impractical for use in resource-constrained environments like mobile devices and the Internet of Things (IoT) [13]. To address this challenge, researchers are exploring new approaches to post-quantum cryptography that are efficient and practical while maintaining the security of sensitive information. C. Cryptographic Protocols for Quantum Computing Cryptographic protocols use cryptographic techniques to secure quantum computing systems. These protocols are designed to protect quantum computers from attacks, prevent the tampering of quantum information, and ensure the integrity of quantum cryptographic keys [2]. 
Examples of cryptographic protocols for quantum computing include quantum secret sharing [12], quantum oblivious transfer, and quantum homomorphic encryption. These protocols are essential for the secure operation of quantum computing systems, and they are being studied extensively by researchers worldwide. Another challenge in developing cryptographic protocols for quantum computing is ensuring they resist attacks by quantum computers. Many current cryptographic protocols are vulnerable to attacks by quantum computers, which could compromise the security of quantum information [2]. To address this issue, researchers are developing new cryptographic protocols resistant to attacks by quantum computers, ensuring that quantum computing systems remain secure. These new cryptographic protocols are being studied extensively by researchers worldwide and can potentially revolutionize how we secure information in the quantum computing era. D. Quantum Cryptography Standards Quantum cryptography standards refer to guidelines defining the requirements for implementing quantum cryptography. These standards ensure that quantum cryptography is implemented securely, reliably, and efficiently [6]. There are several standards for quantum cryptography, including the European Telecommunications Standards Institute (ETSI) and the National Institute of Standards and Technology (NIST) [7]. The previous two organizations work consistently on quantum cryptography standards, as they are developing guidelines and recommendations for implementing quantum cryptography, including post-quantum cryptographic algorithms [8]. Developing quantum cryptography standards will facilitate the adoption of quantum cryptography and ensure that it is implemented securely and efficiently [6]. The development of quantum cryptography standards is essential for the widespread adoption of quantum cryptographic systems. It will ensure that these systems are interoperable and compatible with existing cryptographic protocols. Measures will also help establish trust in quantum cryptographic systems by providing a framework for evaluating and certifying these systems. However, developing standards for quantum cryptography is a complex and challenging task. It requires the collaboration of experts from various fields, including quantum physics, computer science, cryptography and standards development. As quantum cryptographic systems continue to evolve and become more sophisticated, the development of standards will become increasingly important to ensure their security and reliability. E. Quantum Computing and Blockchain Quantum computing has the potential to disrupt the security of blockchain systems. Blockchain is a decentralized, tamper-proof database that records transactions securely and transparently. However, the security of blockchain systems depends on the underlying cryptographic algorithms, which are vulnerable to attacks by quantum computers. To address this issue, researchers are developing post-quantum cryptographic algorithms that can be used to secure blockchain systems. https://american-cse.org/csci2023-ieee/pdfs/CSCI2023-47UoKEqjHou6fHnm3C9aVb/615100a490/615100a490.pdf
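The QKD passage above notes that any attempt to intercept the keys introduces detectable errors. As a rough illustration (a purely classical toy simulation, not a real quantum implementation and not taken from the cited paper), a BB84-style intercept-resend scenario can be sketched in Python; the function names, qubit count, and simplified measurement rule below are assumptions made only for this sketch.

```python
import random

def random_bits(n):
    """n independent random bits (0/1)."""
    return [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Toy rule: measuring in the preparation basis returns the prepared bit;
    # measuring in the other basis returns a uniformly random bit.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def sifted_error_rate(n_qubits=4000, eavesdrop=False):
    alice_bits = random_bits(n_qubits)
    alice_bases = random_bits(n_qubits)

    # What actually reaches Bob: either Alice's original preparation,
    # or Eve's measure-and-resend copy.
    channel_bits, channel_bases = alice_bits, alice_bases
    if eavesdrop:
        eve_bases = random_bits(n_qubits)
        eve_bits = [measure(b, pb, eb)
                    for b, pb, eb in zip(alice_bits, alice_bases, eve_bases)]
        channel_bits, channel_bases = eve_bits, eve_bases

    bob_bases = random_bits(n_qubits)
    bob_bits = [measure(b, pb, mb)
                for b, pb, mb in zip(channel_bits, channel_bases, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases agree, then
    # compare bits to estimate the error rate (a real protocol would reveal
    # only a random sample of the sifted key for this check).
    kept = [(a, b)
            for a, ab, b, bb in zip(alice_bits, alice_bases, bob_bits, bob_bases)
            if ab == bb]
    errors = sum(1 for a, b in kept if a != b)
    return errors / max(len(kept), 1)

if __name__ == "__main__":
    print(f"no eavesdropper:  {sifted_error_rate(eavesdrop=False):.3f}")  # ~0.000
    print(f"intercept-resend: {sifted_error_rate(eavesdrop=True):.3f}")   # ~0.250
```

Without an eavesdropper the sifted-key error rate stays near zero, while the measure-and-resend attacker raises it to roughly 25% in this toy model, which is the kind of statistical signature the passage alludes to.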
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: The Challenges of Quantum Computing in Cryptography While quantum computing offers many benefits, it also presents several challenges. One of the most significant challenges is the threat of quantum attacks on current encryption algorithms. As mentioned earlier, Shor's algorithm can break RSA encryption, which is widely used to secure data. Any data encrypted using RSA encryption is vulnerable to quantum attacks [9]. 1. Error Correction: The effects of noise and decoherence on quantum computers make them extremely prone to mistakes. Implementing trustworthy quantum cryptography systems to overcome these mistakes can be extremely difficult. 2. Scalability: Because quantum computing is still in its infancy, the number of qubits that present quantum computing systems can support is constrained. Due to this, scaling quantum cryptography systems to handle bigger data volumes and applications is challenging. Another challenge is developing new encryption algorithms resistant to quantum attacks. This is because current encryption algorithms that are secure against classical attacks may not be secure against quantum attacks. Therefore, researchers are actively developing new quantum-resistant encryption algorithms that can be used in the post-quantum era. D. Policy Implications of Quantum Computing in Cryptography As quantum computing advances, policymakers must carefully consider the national security and critical infrastructure implications. Encryption algorithms are essential for securing military communications, financial transactions, and government data, making it crucial to assess the impact of quantum computing on current encryption standards and develop strategies to address any vulnerabilities. The National Institute of Standards and Technology (NIST) has initiated a standardization process for post- quantum cryptography to address this issue. The goal is to create a portfolio of quantum-resistant algorithms that can be widely implemented in the coming years. This process involves a public competition in which researchers submit their proposed encryption algorithms and undergo a rigorous evaluation. NIST will then select the most promising algorithms for standardization. In addition to the need for quantum-resistant encryption, policymakers must also consider the potential for quantum computing to be used for offensive purposes. A quantum computer could break into secure systems and access sensitive data, which could be detrimental to national security. Therefore, governments must establish policies and regulations to prevent the misuse of quantum computing technology and safeguard against potential threats. E. Social Implications of Quantum Computing in Cryptography The impact of quantum computing on cryptography goes beyond policy and security concerns. It also has social implications that must be considered. For example, if quantum computing can break current encryption algorithms, it could significantly impact individual privacy. Personal information such as medical records, financial data, and online communications could be compromised. Furthermore, developing new quantum-resistant encryption algorithms will require significant investment and research, which could limit access to these technologies. 
This could widen the digital divide and create disparities in access to secure communication channels, particularly for marginalized communities. In conclusion, quantum computing has the potential to revolutionize cryptography, but it also presents significant challenges. While new cryptographic techniques such as QKD and quantum signature schemes can enhance security, quantum computing can also be used for offensive purposes. Therefore, policymakers and researchers must collaborate to address these challenges and ensure critical infrastructure security and individual privacy in the post-quantum era. III. CRYPTOGRAPHY AND QUANTUM COMPUTING A. Quantum Key Distribution QKD is a method of securely sharing keys between two parties based on the principles of quantum mechanics. This approach takes advantage of the fact that any attempt to intercept the keys will introduce detectable errors. QKD is considered an unconditionally secure key distribution method, making it ideal for sensitive applications. Although QKD is still in the experimental stage, it has shown promising results and is being studied extensively by researchers worldwide. One of the challenges in implementing QKD is the issue of scalability. Current QKD systems are limited regarding the distance they can distribute keys and the number of users they can support [11]. Researchers are exploring new technologies such as quantum repeaters, quantum memories, and quantum routers to overcome this challenge. These technologies will enable the distribution of keys over longer distances and the support of more users, making QKD a viable option for a wide range of applications. B. Quantum-Resistant Cryptography Quantum-resistant cryptography refers to cryptographic techniques designed to be secure against attacks by quantum computers. Quantum computers pose a threat to current cryptographic algorithms like RSA and Elliptic Curve Cryptography (ECC). Hence, new cryptographic techniques that can resist quantum attacks are necessary. Lattice-based cryptography is one of the most promising candidates for post-quantum cryptography and is under extensive research [11]. Code-based cryptography is another well-established approach that has been around for a while. These approaches are believed to provide high security against quantum attacks. However, one of the challenges in developing post-quantum cryptographic algorithms is ensuring that they are efficient and practical for real-world applications. Many of the current post-quantum cryptographic algorithms are computationally intensive, which could make them impractical for use in resource-constrained environments like mobile devices and the Internet of Things (IoT) [13]. To address this challenge, researchers are exploring new approaches to post-quantum cryptography that are efficient and practical while maintaining the security of sensitive information. C. Cryptographic Protocols for Quantum Computing Cryptographic protocols use cryptographic techniques to secure quantum computing systems. These protocols are designed to protect quantum computers from attacks, prevent the tampering of quantum information, and ensure the integrity of quantum cryptographic keys [2]. Examples of cryptographic protocols for quantum computing include quantum secret sharing [12], quantum oblivious transfer, and quantum homomorphic encryption. These protocols are essential for the secure operation of quantum computing systems, and they are being studied extensively by researchers worldwide. 
Another challenge in developing cryptographic protocols for quantum computing is ensuring they resist attacks by quantum computers. Many current cryptographic protocols are vulnerable to attacks by quantum computers, which could compromise the security of quantum information [2]. To address this issue, researchers are developing new cryptographic protocols resistant to attacks by quantum computers, ensuring that quantum computing systems remain secure. These new cryptographic protocols are being studied extensively by researchers worldwide and can potentially revolutionize how we secure information in the quantum computing era. D. Quantum Cryptography Standards Quantum cryptography standards refer to guidelines defining the requirements for implementing quantum cryptography. These standards ensure that quantum cryptography is implemented securely, reliably, and efficiently [6]. There are several standards for quantum cryptography, including the European Telecommunications Standards Institute (ETSI) and the National Institute of Standards and Technology (NIST) [7]. The previous two organizations work consistently on quantum cryptography standards, as they are developing guidelines and recommendations for implementing quantum cryptography, including post-quantum cryptographic algorithms [8]. Developing quantum cryptography standards will facilitate the adoption of quantum cryptography and ensure that it is implemented securely and efficiently [6]. The development of quantum cryptography standards is essential for the widespread adoption of quantum cryptographic systems. It will ensure that these systems are interoperable and compatible with existing cryptographic protocols. Measures will also help establish trust in quantum cryptographic systems by providing a framework for evaluating and certifying these systems. However, developing standards for quantum cryptography is a complex and challenging task. It requires the collaboration of experts from various fields, including quantum physics, computer science, cryptography and standards development. As quantum cryptographic systems continue to evolve and become more sophisticated, the development of standards will become increasingly important to ensure their security and reliability. E. Quantum Computing and Blockchain Quantum computing has the potential to disrupt the security of blockchain systems. Blockchain is a decentralized, tamper-proof database that records transactions securely and transparently. However, the security of blockchain systems depends on the underlying cryptographic algorithms, which are vulnerable to attacks by quantum computers. To address this issue, researchers are developing post-quantum cryptographic algorithms that can be used to secure blockchain systems. USER: Assess how quantum computing will impact existing cryptographic protocols and recommend ways the effectiveness of blockchain systems can be guaranteed in the post-quantum world. Analyze the prospects of the new quantum-resistant algorithms, review the scalability challenges of QKD, and explain the policy implications for national security. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
24
49
1,288
null
684
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
Hey, I'm working on a paper about the use of low-dose aspirin during pregnancy, and I'm trying to clarify something. In the leaflet, it says aspirin reduces the risk of pre-eclampsia and smaller babies, but I'm confused about how it affects placental blood flow versus its impact on potential bleeding during labor. Wouldn't increased blood flow increase bleeding risks? Also, how does aspirin interact with indigestion remedies, and does that complicate its safety for someone with both pregnancy and digestive issues?
You have been given this information leaflet as you have been advised to take low dose aspirin, 150mg once a day from 12 to 36 weeks of your pregnancy. What is aspirin? Aspirin is known as an NSAID (a non-steroidal anti-inflammatory drug). Aspirin is often used to treat pain, fever, inflammation or prevent clot formation. There is evidence that taking low dose aspirin once a day can help increase the function and blood flow of your placenta (afterbirth) which provides your baby with oxygen and nutrients during your pregnancy to help them grow. Why have I been advised to take aspirin? Not everyone is recommended to take aspirin in pregnancy. You have been advised to take a low dose of aspirin during your pregnancy to reduce the risk of: • developing hypertension (high blood pressure) and pre-eclampsia (high blood pressure and protein in your urine) • giving birth to your baby prematurely (before 37 weeks) • your baby being smaller than expected Your midwife or obstetrician (a doctor who specialises in the care of pregnant women) may recommend that you take low dose aspirin to reduce the risk of hypertension (high blood pressure) if one of the following apply to you: • you had hypertension (high blood pressure) during a previous pregnancy • you have chronic kidney disease • you have an auto-immune disease (for example, lupus or antiphospholipid syndrome) • you have Type 1 or 2 diabetes • you have chronic hypertension (high blood pressure before pregnancy) • you have previously given birth to a baby who was smaller than expected • you have low Pregnancy Associated Plasma Protein (PAPP-A) screening blood test • you are aged 40 years or older Low dose aspirin may also be recommended if two or more of the following apply to you: • this is your first pregnancy • there are more than 10 years between this pregnancy and the birth of your last baby • your BMI is 35 or more at your booking appointment • there is a family history of pre-eclampsia in a first degree relative • this is a multiple pregnancy (for example, twins or triplets) You may also be advised to take low dose aspirin if you have a slightly higher chance of having a baby which may be smaller than expected. Or there were any concerns about how your placenta was working in a previous pregnancy; this will be discussed with you. Page 2 of 3 How and when do I take aspirin? You should take 150mg (2 x75mg tablets) once a day from 12 weeks until 36 weeks of your pregnancy. It is best to take in the evening either with or just after food. Please do not worry if you forget to take a tablet, just take one when you remember, however make sure you only take 150mg once a day. If you think you may be in labour, you can stop taking your aspirin until this is confirmed. It will not increase your risk of bleeding during your labour. Is low dose aspirin safe to take in pregnancy? Low dose aspirin is not known to be harmful to you or your baby during pregnancy. In fact it is known to reduce the risk of harm by reducing the risk of high blood pressure, pre-eclampsia, smaller babies and stillbirth. However, aspirin can affect (and be affected by) other medications, including ‘over the Counter’ medicines and herbal remedies. Please discuss any other medications you are taking with your midwife, GP or obstetrician. Side effects Taking low dose aspirin can cause mild indigestion. If you take your aspirin either with or just after food, it will be less likely to upset your stomach. Avoid taking aspirin on an empty stomach. 
If you also take indigestion remedies, take them at least two hours before or after you take your aspirin. There is no evidence to suggest low dose aspirin causes any increase in bleeding during pregnancy or at the time of birth. If you have any questions or concerns about taking low dose aspirin please speak to your obstetrician, GP or midwife. Allergies Please tell your obstetrician, midwife or GP if you are allergic to aspirin (or other NSAIDS), or you have severe asthma, chronic kidney problems, stomach ulcers or have been previously advised not to take aspirin or other NSAIDs. As with any medicine, you should seek urgent medical assistance if you experience serious side effects such as wheezing, swelling of the lips, face or body, rashes or other indications of an allergic reaction. What can I do to help? If you smoke it is very important that you stop as it can affect placental (afterbirth) function and your baby’s growth. Please contact your community or continuity team midwife who can refer you to smoking cessation; you can also self-refer at One You East Sussex Sources of information If you would like more information about taking low dose aspirin in pregnancy, your midwife or obstetrician will be happy to answer your questions and advise you
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> Hey, I'm working on a paper about the use of low-dose aspirin during pregnancy, and I'm trying to clarify something. In the leaflet, it says aspirin reduces the risk of pre-eclampsia and smaller babies, but I'm confused about how it affects placental blood flow versus its impact on potential bleeding during labor. Wouldn't increased blood flow increase bleeding risks? Also, how does aspirin interact with indigestion remedies, and does that complicate its safety for someone with both pregnancy and digestive issues? <TEXT> You have been given this information leaflet as you have been advised to take low dose aspirin, 150mg once a day from 12 to 36 weeks of your pregnancy. What is aspirin? Aspirin is known as an NSAID (a non-steroidal anti-inflammatory drug). Aspirin is often used to treat pain, fever, inflammation or prevent clot formation. There is evidence that taking low dose aspirin once a day can help increase the function and blood flow of your placenta (afterbirth) which provides your baby with oxygen and nutrients during your pregnancy to help them grow. Why have I been advised to take aspirin? Not everyone is recommended to take aspirin in pregnancy. You have been advised to take a low dose of aspirin during your pregnancy to reduce the risk of: • developing hypertension (high blood pressure) and pre-eclampsia (high blood pressure and protein in your urine) • giving birth to your baby prematurely (before 37 weeks) • your baby being smaller than expected Your midwife or obstetrician (a doctor who specialises in the care of pregnant women) may recommend that you take low dose aspirin to reduce the risk of hypertension (high blood pressure) if one of the following apply to you: • you had hypertension (high blood pressure) during a previous pregnancy • you have chronic kidney disease • you have an auto-immune disease (for example, lupus or antiphospholipid syndrome) • you have Type 1 or 2 diabetes • you have chronic hypertension (high blood pressure before pregnancy) • you have previously given birth to a baby who was smaller than expected • you have low Pregnancy Associated Plasma Protein (PAPP-A) screening blood test • you are aged 40 years or older Low dose aspirin may also be recommended if two or more of the following apply to you: • this is your first pregnancy • there are more than 10 years between this pregnancy and the birth of your last baby • your BMI is 35 or more at your booking appointment • there is a family history of pre-eclampsia in a first degree relative • this is a multiple pregnancy (for example, twins or triplets) You may also be advised to take low dose aspirin if you have a slightly higher chance of having a baby which may be smaller than expected. Or there were any concerns about how your placenta was working in a previous pregnancy; this will be discussed with you. Page 2 of 3 How and when do I take aspirin? You should take 150mg (2 x75mg tablets) once a day from 12 weeks until 36 weeks of your pregnancy. It is best to take in the evening either with or just after food. Please do not worry if you forget to take a tablet, just take one when you remember, however make sure you only take 150mg once a day. If you think you may be in labour, you can stop taking your aspirin until this is confirmed. It will not increase your risk of bleeding during your labour. Is low dose aspirin safe to take in pregnancy? 
Low dose aspirin is not known to be harmful to you or your baby during pregnancy. In fact it is known to reduce the risk of harm by reducing the risk of high blood pressure, pre-eclampsia, smaller babies and stillbirth. However, aspirin can affect (and be affected by) other medications, including ‘over the Counter’ medicines and herbal remedies. Please discuss any other medications you are taking with your midwife, GP or obstetrician. Side effects Taking low dose aspirin can cause mild indigestion. If you take your aspirin either with or just after food, it will be less likely to upset your stomach. Avoid taking aspirin on an empty stomach. If you also take indigestion remedies, take them at least two hours before or after you take your aspirin. There is no evidence to suggest low dose aspirin causes any increase in bleeding during pregnancy or at the time of birth. If you have any questions or concerns about taking low dose aspirin please speak to your obstetrician, GP or midwife. Allergies Please tell your obstetrician, midwife or GP if you are allergic to aspirin (or other NSAIDS), or you have severe asthma, chronic kidney problems, stomach ulcers or have been previously advised not to take aspirin or other NSAIDs. As with any medicine, you should seek urgent medical assistance if you experience serious side effects such as wheezing, swelling of the lips, face or body, rashes or other indications of an allergic reaction. What can I do to help? If you smoke it is very important that you stop as it can affect placental (afterbirth) function and your baby’s growth. Please contact your community or continuity team midwife who can refer you to smoking cessation; you can also self-refer at One You East Sussex Sources of information If you would like more information about taking low dose aspirin in pregnancy, your midwife or obstetrician will be happy to answer your questions and advise you https://www.esht.nhs.uk/wp-content/uploads/2021/06/0925.pdf
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: You have been given this information leaflet as you have been advised to take low dose aspirin, 150mg once a day from 12 to 36 weeks of your pregnancy. What is aspirin? Aspirin is known as an NSAID (a non-steroidal anti-inflammatory drug). Aspirin is often used to treat pain, fever, inflammation or prevent clot formation. There is evidence that taking low dose aspirin once a day can help increase the function and blood flow of your placenta (afterbirth) which provides your baby with oxygen and nutrients during your pregnancy to help them grow. Why have I been advised to take aspirin? Not everyone is recommended to take aspirin in pregnancy. You have been advised to take a low dose of aspirin during your pregnancy to reduce the risk of: • developing hypertension (high blood pressure) and pre-eclampsia (high blood pressure and protein in your urine) • giving birth to your baby prematurely (before 37 weeks) • your baby being smaller than expected Your midwife or obstetrician (a doctor who specialises in the care of pregnant women) may recommend that you take low dose aspirin to reduce the risk of hypertension (high blood pressure) if one of the following apply to you: • you had hypertension (high blood pressure) during a previous pregnancy • you have chronic kidney disease • you have an auto-immune disease (for example, lupus or antiphospholipid syndrome) • you have Type 1 or 2 diabetes • you have chronic hypertension (high blood pressure before pregnancy) • you have previously given birth to a baby who was smaller than expected • you have low Pregnancy Associated Plasma Protein (PAPP-A) screening blood test • you are aged 40 years or older Low dose aspirin may also be recommended if two or more of the following apply to you: • this is your first pregnancy • there are more than 10 years between this pregnancy and the birth of your last baby • your BMI is 35 or more at your booking appointment • there is a family history of pre-eclampsia in a first degree relative • this is a multiple pregnancy (for example, twins or triplets) You may also be advised to take low dose aspirin if you have a slightly higher chance of having a baby which may be smaller than expected. Or there were any concerns about how your placenta was working in a previous pregnancy; this will be discussed with you. Page 2 of 3 How and when do I take aspirin? You should take 150mg (2 x75mg tablets) once a day from 12 weeks until 36 weeks of your pregnancy. It is best to take in the evening either with or just after food. Please do not worry if you forget to take a tablet, just take one when you remember, however make sure you only take 150mg once a day. If you think you may be in labour, you can stop taking your aspirin until this is confirmed. It will not increase your risk of bleeding during your labour. Is low dose aspirin safe to take in pregnancy? Low dose aspirin is not known to be harmful to you or your baby during pregnancy. In fact it is known to reduce the risk of harm by reducing the risk of high blood pressure, pre-eclampsia, smaller babies and stillbirth. However, aspirin can affect (and be affected by) other medications, including ‘over the Counter’ medicines and herbal remedies. Please discuss any other medications you are taking with your midwife, GP or obstetrician. Side effects Taking low dose aspirin can cause mild indigestion. 
If you take your aspirin either with or just after food, it will be less likely to upset your stomach. Avoid taking aspirin on an empty stomach. If you also take indigestion remedies, take them at least two hours before or after you take your aspirin. There is no evidence to suggest low dose aspirin causes any increase in bleeding during pregnancy or at the time of birth. If you have any questions or concerns about taking low dose aspirin please speak to your obstetrician, GP or midwife. Allergies Please tell your obstetrician, midwife or GP if you are allergic to aspirin (or other NSAIDS), or you have severe asthma, chronic kidney problems, stomach ulcers or have been previously advised not to take aspirin or other NSAIDs. As with any medicine, you should seek urgent medical assistance if you experience serious side effects such as wheezing, swelling of the lips, face or body, rashes or other indications of an allergic reaction. What can I do to help? If you smoke it is very important that you stop as it can affect placental (afterbirth) function and your baby’s growth. Please contact your community or continuity team midwife who can refer you to smoking cessation; you can also self-refer at One You East Sussex Sources of information If you would like more information about taking low dose aspirin in pregnancy, your midwife or obstetrician will be happy to answer your questions and advise you USER: Hey, I'm working on a paper about the use of low-dose aspirin during pregnancy, and I'm trying to clarify something. In the leaflet, it says aspirin reduces the risk of pre-eclampsia and smaller babies, but I'm confused about how it affects placental blood flow versus its impact on potential bleeding during labor. Wouldn't increased blood flow increase bleeding risks? Also, how does aspirin interact with indigestion remedies, and does that complicate its safety for someone with both pregnancy and digestive issues? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
81
838
null
719
Create a short paragraph response to the question using clear, precise vocabulary. This should only rely on information contained in the text.
What are the primary positive aspects reviewers refer to?
Talking 2-XL Robot by Tiger Electronics User Reviews 1. Reviewer: Leslie Cain Rating: 5.0 stars Date: June 7, 2016 Verified Purchase Review: Great toy. Still a hit with the next generation of kids as well. Arrived on time and in perfect working condition. One person found this helpful. 2. Reviewer: Artfan1166 Rating: 5.0 stars Date: May 11, 2015 Verified Purchase Review: It was perfect, exactly what I had hoped it would be! 3. Reviewer: Richard K. Rating: 5.0 stars Date: January 5, 2015 Verified Purchase Review: Just what I expected.
Create a short paragraph response to the question using clear, precise vocabulary. This should only rely on information contained in the text. What are the primary positive aspects reviewers refer to? Talking 2-XL Robot by Tiger Electronics User Reviews 1. Reviewer: Leslie Cain Rating: 5.0 stars Date: June 7, 2016 Verified Purchase Review: Great toy. Still a hit with the next generation of kids as well. Arrived on time and in perfect working condition. One person found this helpful. 2. Reviewer: Artfan1166 Rating: 5.0 stars Date: May 11, 2015 Verified Purchase Review: It was perfect, exactly what I had hoped it would be! 3. Reviewer: Richard K. Rating: 5.0 stars Date: January 5, 2015 Verified Purchase Review: Just what I expected.
Create a short paragraph response to the question using clear, precise vocabulary. This should only rely on information contained in the text. EVIDENCE: Talking 2-XL Robot by Tiger Electronics User Reviews 1. Reviewer: Leslie Cain Rating: 5.0 stars Date: June 7, 2016 Verified Purchase Review: Great toy. Still a hit with the next generation of kids as well. Arrived on time and in perfect working condition. One person found this helpful. 2. Reviewer: Artfan1166 Rating: 5.0 stars Date: May 11, 2015 Verified Purchase Review: It was perfect, exactly what I had hoped it would be! 3. Reviewer: Richard K. Rating: 5.0 stars Date: January 5, 2015 Verified Purchase Review: Just what I expected. USER: What are the primary positive aspects reviewers refer to? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
22
9
90
null
340
Respond to this prompt using only the information contained in the context as you are not an expert in this subject matter.
What does the context suggest are potential promising areas of research going forward?
Another approach commonly brought up by patients on LT4 with persistent complaints is the use of a combination therapy including LT4 and T3. This regimen was addressed by 14 randomized trials of the combination therapy that did not demonstrate benefit,37,44 and 5 other studies67–71 that reported some benefit.40 However, the study protocols differed in terms of design, including variable use of crossover or parallel groups, blinding, the ratio of T4 to T3 dosage, treatment duration as well as definitions of primary and secondary outcomes. In addition, some studies were subject to carryover effects, overtreatment, and limited inclusion of men and older age groups, underpowered sample size, short duration and once daily T3 dosing. Consistently, 5 meta-analyses or reviews also suggested no clear advantage of the combination therapy.37,72–75 Importantly, potential long-term risks of T3 addition, such as cardiac arrhythmias, or decreased bone mineral density were not fully investigated. Therefore, Guidelines of the American Thyroid Association concluded that there is insufficient evidence to recommend the combination therapy. However, if such a therapy is chosen, it should resemble physiology, that is, the physiological molar T4 to T3 ratio of 14:1 to 15:1,37 and synthetic T4 to T3 conversion factor 3:1.76 Sustained release T3 formulations under development may help achieving physiological goals. Interestingly, a benefit of a therapy containing T3 was shown in a subgroup analysis of patients who remained the most symptomatic while taking LT4. Therefore, this might be the group of patients that may need to be targeted in future, well designed and appropriately powered studies on the combination therapies.77 The subset of patients potentially benefiting from the combination therapy is likely to have a pathophysiological explanation, as it was shown that lower T3 levels during monotherapy with LT4 were associated with the presence of Thr92Ala polymorphism of deiodinase type 2 (DIO2 ) gene.78 Genotyping for the presence of Thr92Ala polymorphism in patients treated for hypothyroidism revealed that Ala/Ala homozygotes had worse quality of life scores while taking LT4.79 In addition, another small study showed that patients with both Thr92Ala polymorphism and a polymorphism in one of the thyroid hormone transporters (MTC10 ) preferred the combination therapy with both LT4 and T3.80 However, other studies did not confirm these findings.81–83 Hence, only the results from a new, prospective, well-designed, adequately powered study of the effects of DIO2 and MTC10 polymorphisms on response to therapy can assess if this genetic background could be a marker guiding either a monotherapy or the combination therapy in overtly hypothyroid patients. The role of surgery for HT has been traditionally limited to the patients presenting with either pain or compressive symptoms due to goiter or co-existing malignant thyroid nodules.84 However, it was recently hypothesized that thyroidectomy might be a therapeutic modality used to reduce TPOAbs titers, as the presence of such antibodies is associated with lower quality of life even in euthyroid individuals. Consequently, a clinical trial addressed this concept, randomizing highly positive TPOAb patients with continued symptoms while receiving LT4 to either thyroidectomy or continued medical management. 
In those who underwent thyroidectomy, TPOAbs significantly declined, quality of life and fatigue improved, and the effect was sustained at 12 to 18 month landmarks.85 Hashimoto thyroiditis and thyroid nodules. Based on evaluation of pathological specimens, the average prevalence of papillary thyroid cancer in patients with HT was around 27%, with an associated increased risk ratio of 1.59, as compared with the general population.86, 87 A recent meta-analysis that combined the studies analyzing cytological and pathological specimens derived from patients with HT concluded that this association is based on low-to-moderate quality evidence.88 Apart from papillary thyroid cancer, a non-Hodgkin primary thyroid lymphoma was strongly associated with HT, with a risk of about 60 times higher than in the general population.32 Thyroid lymphoma accounts for approximately 5% of all thyroid neoplasms. Diagnosis of thyroid lymphoma is important to be established, as it changes the first line therapy from surgery, that is routinely implemented for malignant thyroid nodules, to appropriately targeted chemotherapy for lymphoproliferative disorders. Therapy of thyroid lymphoma and malignant thyroid nodules is beyond the scope of this review, but can be found in the respective guidelines.89 Hashimoto thyroiditis and pregnancy The prevalence of TPOAbs in pregnant women is estimated to be 5%–14% and TgAbs are seen in 3%–18% of pregnant female individuals.90 The presence of these Abs indicating thyroid autoimmunity, is associated with a 2 to 4-fold increase in the risk of recurrent miscarriages91,92 and 2 to 3- fold increased risk of preterm birth.91,93,94 The mechanisms behind these adverse pregnancy outcomes in TPOAb positive euthyroid women are unclear but some authors postulate that TPOAbs might be markers for other forms of autoimmunity that target the placental-fetal unit.95 However, thyroid autoimmunity seems to have an additive or synergistic effect on miscarriage 93 and prematurity 96 risk in women with maternal subclinical hypothyroidism. 
A recent meta-analysis including 19 cohort studies enrolling 47 045 pregnant women showed almost 3-fold increased risk of preterm birth in women with subclinical hypothyroidism and 1.5-fold increased risk of preterm birth in women with isolated hypothyroxinemia.94 Another meta-analysis of 26 studies found significant associations between maternal subclinical hypothyroidism or hypothyroxinemia and lower child IQ, language delay or global developmental delay as compared with children of euthyroid women.97 Overt hypothyroidism was associated with increased rates of gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, postpartum hemorrhage, preterm delivery, low birthweight, infant intensive care unit admissions, fetal death, and neurodevelopmental delays in the offspring.98,99,100 Therefore, overt hypothyroidism should be treated to prevent adverse effects on pregnancy and child developmental outcomes and should be started before conception to achieve biochemical euthyroidism.26 Therapy with LT4 improved success rate of in vitro fertilization in TPOAbs positive women with TSH above 2.5 mIU/ml.26 Importantly, women treated for hypothyroidism typically require a 20% to 30% increase in their LT4 dose, which usually translates into addition of 2 pills per week early in the first trimester.26 The physiological explanation for increased thyroid hormone requirements is based upon several factors including increased hepatic thyroxine binding globulin synthesis and enhanced metabolism of thyroid hormone through its inactivation by the placental type 3 DIO.26,101 The use of T3 or T4+T3 combination therapy is not indicated in pregnancy, as liothyronine does not cross the blood-brain barrier to the fetal brain.102 LT4 replacement therapy should be monitored monthly, as over- and undertreatment lead to adverse pregnancy outcomes.26 The suggested target TSH is within the lower half of the trimester-specific reference range or below 2.5 mIU/ml, if the trimester-specific ranges are not available.26 Regarding maternal subclinical hypothyroidism, the 2017 American Thyroid Association guidelines recommend utilizing TPOAb status along with serum levels of TSH to guide treatment decisions (TABLE 2).26 LT4 therapy is not recommended for isolated hypothyroxinemia.26 A 2021 systematic review and meta-analysis of 6 randomized controlled trials assessing the effect of LT4 treatment in euthyroid women with thyroid autoimmunity did not find any significant differences in the relative risk of miscarriage and preterm delivery, or outcomes with live birth. Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be implemented (TABLE 2).103 Areas of research There are promising new models being developed to study the pathophysiology of thyroid disease, as functional thyroid follicles from embryonic or pluripotent stem cells were established in animal models.104,105 This potentially allows for studying mechanisms of autoimmunity that could guide prevention of the disease progression to overt hypothyroidism in predisposed individuals. Stem cells could be also used in regenerative medicine to replace those destroyed by the autoimmune processes in the thyroid gland. A better understanding of the response to therapy with thyroid hormones might be achieved from studies focusing on transcriptome profiling of expression of genes responsive to thyroid hormone action. 
This could help titrating thyroid hormone replacement therapy. New preparations of sustained release T3 have successfully passed phase 1 clinical trials and may add to our armamentarium for HT therapy once necessary efficacy trials are completed.
Respond to this prompt using only the information contained in the context as you are not an expert in this subject matter. What does the context suggest are potential promising areas of research going forward? Another approach commonly brought up by patients on LT4 with persistent complaints is the use of a combination therapy including LT4 and T3. This regimen was addressed by 14 randomized trials of the combination therapy that did not demonstrate benefit,37,44 and 5 other studies67–71 that reported some benefit.40 However, the study protocols differed in terms of design, including variable use of crossover or parallel groups, blinding, the ratio of T4 to T3 dosage, treatment duration as well as definitions of primary and secondary outcomes. In addition, some studies were subject to carryover effects, overtreatment, and limited inclusion of men and older age groups, underpowered sample size, short duration and once daily T3 dosing. Consistently, 5 meta-analyses or reviews also suggested no clear advantage of the combination therapy.37,72–75 Importantly, potential long-term risks of T3 addition, such as cardiac arrhythmias, or decreased bone mineral density were not fully investigated. Therefore, Guidelines of the American Thyroid Association concluded that there is insufficient evidence to recommend the combination therapy. However, if such a therapy is chosen, it should resemble physiology, that is, the physiological molar T4 to T3 ratio of 14:1 to 15:1,37 and synthetic T4 to T3 conversion factor 3:1.76 Sustained release T3 formulations under development may help achieving physiological goals. Interestingly, a benefit of a therapy containing T3 was shown in a subgroup analysis of patients who remained the most symptomatic while taking LT4. Therefore, this might be the group of patients that may need to be targeted in future, well designed and appropriately powered studies on the combination therapies.77 The subset of patients potentially benefiting from the combination therapy is likely to have a pathophysiological explanation, as it was shown that lower T3 levels during monotherapy with LT4 were associated with the presence of Thr92Ala polymorphism of deiodinase type 2 (DIO2 ) gene.78 Genotyping for the presence of Thr92Ala polymorphism in patients treated for hypothyroidism revealed that Ala/Ala homozygotes had worse quality of life scores while taking LT4.79 In addition, another small study showed that patients with both Thr92Ala polymorphism and a polymorphism in one of the thyroid hormone transporters (MTC10 ) preferred the combination therapy with both LT4 and T3.80 However, other studies did not confirm these findings.81–83 Hence, only the results from a new, prospective, well-designed, adequately powered study of the effects of DIO2 and MTC10 polymorphisms on response to therapy can assess if this genetic background could be a marker guiding either a monotherapy or the combination therapy in overtly hypothyroid patients. The role of surgery for HT has been traditionally limited to the patients presenting with either pain or compressive symptoms due to goiter or co-existing malignant thyroid nodules.84 However, it was recently hypothesized that thyroidectomy might be a therapeutic modality used to reduce TPOAbs titers, as the presence of such antibodies is associated with lower quality of life even in euthyroid individuals. 
Consequently, a clinical trial addressed this concept, randomizing highly positive TPOAb patients with continued symptoms while receiving LT4 to either thyroidectomy or continued medical management. In those who underwent thyroidectomy, TPOAbs significantly declined, quality of life and fatigue improved, and the effect was sustained at 12 to 18 month landmarks.85 Hashimoto thyroiditis and thyroid nodules. Based on evaluation of pathological specimens, the average prevalence of papillary thyroid cancer in patients with HT was around 27%, with an associated increased risk ratio of 1.59, as compared with the general population.86, 87 A recent meta-analysis that combined the studies analyzing cytological and pathological specimens derived from patients with HT concluded that this association is based on low-to-moderate quality evidence.88 Apart from papillary thyroid cancer, a non-Hodgkin primary thyroid lymphoma was strongly associated with HT, with a risk of about 60 times higher than in the general population.32 Thyroid lymphoma accounts for approximately 5% of all thyroid neoplasms. Diagnosis of thyroid lymphoma is important to be established, as it changes the first line therapy from surgery, that is routinely implemented for malignant thyroid nodules, to appropriately targeted chemotherapy for lymphoproliferative disorders. Therapy of thyroid lymphoma and malignant thyroid nodules is beyond the scope of this review, but can be found in the respective guidelines.89 Hashimoto thyroiditis and pregnancy The prevalence of TPOAbs in pregnant women is estimated to be 5%–14% and TgAbs are seen in 3%–18% of pregnant female individuals.90 The presence of these Abs indicating thyroid autoimmunity, is associated with a 2 to 4-fold increase in the risk of recurrent miscarriages91,92 and 2 to 3- fold increased risk of preterm birth.91,93,94 The mechanisms behind these adverse pregnancy outcomes in TPOAb positive euthyroid women are unclear but some authors postulate that TPOAbs might be markers for other forms of autoimmunity that target the placental-fetal unit.95 However, thyroid autoimmunity seems to have an additive or synergistic effect on miscarriage 93 and prematurity 96 risk in women with maternal subclinical hypothyroidism. 
A recent meta-analysis including 19 cohort studies enrolling 47 045 pregnant women showed almost 3-fold increased risk of preterm birth in women with subclinical hypothyroidism and 1.5-fold increased risk of preterm birth in women with isolated hypothyroxinemia.94 Another meta-analysis of 26 studies found significant associations between maternal subclinical hypothyroidism or hypothyroxinemia and lower child IQ, language delay or global developmental delay as compared with children of euthyroid women.97 Overt hypothyroidism was associated with increased rates of gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, postpartum hemorrhage, preterm delivery, low birthweight, infant intensive care unit admissions, fetal death, and neurodevelopmental delays in the offspring.98,99,100 Therefore, overt hypothyroidism should be treated to prevent adverse effects on pregnancy and child developmental outcomes and should be started before conception to achieve biochemical euthyroidism.26 Therapy with LT4 improved success rate of in vitro fertilization in TPOAbs positive women with TSH above 2.5 mIU/ml.26 Importantly, women treated for hypothyroidism typically require a 20% to 30% increase in their LT4 dose, which usually translates into addition of 2 pills per week early in the first trimester.26 The physiological explanation for increased thyroid hormone requirements is based upon several factors including increased hepatic thyroxine binding globulin synthesis and enhanced metabolism of thyroid hormone through its inactivation by the placental type 3 DIO.26,101 The use of T3 or T4+T3 combination therapy is not indicated in pregnancy, as liothyronine does not cross the blood-brain barrier to the fetal brain.102 LT4 replacement therapy should be monitored monthly, as over- and undertreatment lead to adverse pregnancy outcomes.26 The suggested target TSH is within the lower half of the trimester-specific reference range or below 2.5 mIU/ml, if the trimester-specific ranges are not available.26 Regarding maternal subclinical hypothyroidism, the 2017 American Thyroid Association guidelines recommend utilizing TPOAb status along with serum levels of TSH to guide treatment decisions (TABLE 2).26 LT4 therapy is not recommended for isolated hypothyroxinemia.26 A 2021 systematic review and meta-analysis of 6 randomized controlled trials assessing the effect of LT4 treatment in euthyroid women with thyroid autoimmunity did not find any significant differences in the relative risk of miscarriage and preterm delivery, or outcomes with live birth. Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be implemented (TABLE 2).103 Areas of research There are promising new models being developed to study the pathophysiology of thyroid disease, as functional thyroid follicles from embryonic or pluripotent stem cells were established in animal models.104,105 This potentially allows for studying mechanisms of autoimmunity that could guide prevention of the disease progression to overt hypothyroidism in predisposed individuals. Stem cells could be also used in regenerative medicine to replace those destroyed by the autoimmune processes in the thyroid gland. A better understanding of the response to therapy with thyroid hormones might be achieved from studies focusing on transcriptome profiling of expression of genes responsive to thyroid hormone action. 
This could help titrating thyroid hormone replacement therapy. New preparations of sustained release T3 have successfully passed phase 1 clinical trials and may add to our armamentarium for HT therapy once necessary efficacy trials are completed.
Respond to this prompt using only the information contained in the context as you are not an expert in this subject matter. EVIDENCE: Another approach commonly brought up by patients on LT4 with persistent complaints is the use of a combination therapy including LT4 and T3. This regimen was addressed by 14 randomized trials of the combination therapy that did not demonstrate benefit,37,44 and 5 other studies67–71 that reported some benefit.40 However, the study protocols differed in terms of design, including variable use of crossover or parallel groups, blinding, the ratio of T4 to T3 dosage, treatment duration as well as definitions of primary and secondary outcomes. In addition, some studies were subject to carryover effects, overtreatment, and limited inclusion of men and older age groups, underpowered sample size, short duration and once daily T3 dosing. Consistently, 5 meta-analyses or reviews also suggested no clear advantage of the combination therapy.37,72–75 Importantly, potential long-term risks of T3 addition, such as cardiac arrhythmias, or decreased bone mineral density were not fully investigated. Therefore, Guidelines of the American Thyroid Association concluded that there is insufficient evidence to recommend the combination therapy. However, if such a therapy is chosen, it should resemble physiology, that is, the physiological molar T4 to T3 ratio of 14:1 to 15:1,37 and synthetic T4 to T3 conversion factor 3:1.76 Sustained release T3 formulations under development may help achieving physiological goals. Interestingly, a benefit of a therapy containing T3 was shown in a subgroup analysis of patients who remained the most symptomatic while taking LT4. Therefore, this might be the group of patients that may need to be targeted in future, well designed and appropriately powered studies on the combination therapies.77 The subset of patients potentially benefiting from the combination therapy is likely to have a pathophysiological explanation, as it was shown that lower T3 levels during monotherapy with LT4 were associated with the presence of Thr92Ala polymorphism of deiodinase type 2 (DIO2 ) gene.78 Genotyping for the presence of Thr92Ala polymorphism in patients treated for hypothyroidism revealed that Ala/Ala homozygotes had worse quality of life scores while taking LT4.79 In addition, another small study showed that patients with both Thr92Ala polymorphism and a polymorphism in one of the thyroid hormone transporters (MTC10 ) preferred the combination therapy with both LT4 and T3.80 However, other studies did not confirm these findings.81–83 Hence, only the results from a new, prospective, well-designed, adequately powered study of the effects of DIO2 and MTC10 polymorphisms on response to therapy can assess if this genetic background could be a marker guiding either a monotherapy or the combination therapy in overtly hypothyroid patients. The role of surgery for HT has been traditionally limited to the patients presenting with either pain or compressive symptoms due to goiter or co-existing malignant thyroid nodules.84 However, it was recently hypothesized that thyroidectomy might be a therapeutic modality used to reduce TPOAbs titers, as the presence of such antibodies is associated with lower quality of life even in euthyroid individuals. Consequently, a clinical trial addressed this concept, randomizing highly positive TPOAb patients with continued symptoms while receiving LT4 to either thyroidectomy or continued medical management. 
In those who underwent thyroidectomy, TPOAbs significantly declined, quality of life and fatigue improved, and the effect was sustained at 12 to 18 month landmarks.85 Hashimoto thyroiditis and thyroid nodules. Based on evaluation of pathological specimens, the average prevalence of papillary thyroid cancer in patients with HT was around 27%, with an associated increased risk ratio of 1.59, as compared with the general population.86, 87 A recent meta-analysis that combined the studies analyzing cytological and pathological specimens derived from patients with HT concluded that this association is based on low-to-moderate quality evidence.88 Apart from papillary thyroid cancer, a non-Hodgkin primary thyroid lymphoma was strongly associated with HT, with a risk of about 60 times higher than in the general population.32 Thyroid lymphoma accounts for approximately 5% of all thyroid neoplasms. Diagnosis of thyroid lymphoma is important to be established, as it changes the first line therapy from surgery, that is routinely implemented for malignant thyroid nodules, to appropriately targeted chemotherapy for lymphoproliferative disorders. Therapy of thyroid lymphoma and malignant thyroid nodules is beyond the scope of this review, but can be found in the respective guidelines.89 Hashimoto thyroiditis and pregnancy The prevalence of TPOAbs in pregnant women is estimated to be 5%–14% and TgAbs are seen in 3%–18% of pregnant female individuals.90 The presence of these Abs indicating thyroid autoimmunity, is associated with a 2 to 4-fold increase in the risk of recurrent miscarriages91,92 and 2 to 3- fold increased risk of preterm birth.91,93,94 The mechanisms behind these adverse pregnancy outcomes in TPOAb positive euthyroid women are unclear but some authors postulate that TPOAbs might be markers for other forms of autoimmunity that target the placental-fetal unit.95 However, thyroid autoimmunity seems to have an additive or synergistic effect on miscarriage 93 and prematurity 96 risk in women with maternal subclinical hypothyroidism. 
A recent meta-analysis including 19 cohort studies enrolling 47 045 pregnant women showed almost 3-fold increased risk of preterm birth in women with subclinical hypothyroidism and 1.5-fold increased risk of preterm birth in women with isolated hypothyroxinemia.94 Another meta-analysis of 26 studies found significant associations between maternal subclinical hypothyroidism or hypothyroxinemia and lower child IQ, language delay or global developmental delay as compared with children of euthyroid women.97 Overt hypothyroidism was associated with increased rates of gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, postpartum hemorrhage, preterm delivery, low birthweight, infant intensive care unit admissions, fetal death, and neurodevelopmental delays in the offspring.98,99,100 Therefore, overt hypothyroidism should be treated to prevent adverse effects on pregnancy and child developmental outcomes and should be started before conception to achieve biochemical euthyroidism.26 Therapy with LT4 improved success rate of in vitro fertilization in TPOAbs positive women with TSH above 2.5 mIU/ml.26 Importantly, women treated for hypothyroidism typically require a 20% to 30% increase in their LT4 dose, which usually translates into addition of 2 pills per week early in the first trimester.26 The physiological explanation for increased thyroid hormone requirements is based upon several factors including increased hepatic thyroxine binding globulin synthesis and enhanced metabolism of thyroid hormone through its inactivation by the placental type 3 DIO.26,101 The use of T3 or T4+T3 combination therapy is not indicated in pregnancy, as liothyronine does not cross the blood-brain barrier to the fetal brain.102 LT4 replacement therapy should be monitored monthly, as over- and undertreatment lead to adverse pregnancy outcomes.26 The suggested target TSH is within the lower half of the trimester-specific reference range or below 2.5 mIU/ml, if the trimester-specific ranges are not available.26 Regarding maternal subclinical hypothyroidism, the 2017 American Thyroid Association guidelines recommend utilizing TPOAb status along with serum levels of TSH to guide treatment decisions (TABLE 2).26 LT4 therapy is not recommended for isolated hypothyroxinemia.26 A 2021 systematic review and meta-analysis of 6 randomized controlled trials assessing the effect of LT4 treatment in euthyroid women with thyroid autoimmunity did not find any significant differences in the relative risk of miscarriage and preterm delivery, or outcomes with live birth. Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be implemented (TABLE 2).103 Areas of research There are promising new models being developed to study the pathophysiology of thyroid disease, as functional thyroid follicles from embryonic or pluripotent stem cells were established in animal models.104,105 This potentially allows for studying mechanisms of autoimmunity that could guide prevention of the disease progression to overt hypothyroidism in predisposed individuals. Stem cells could be also used in regenerative medicine to replace those destroyed by the autoimmune processes in the thyroid gland. A better understanding of the response to therapy with thyroid hormones might be achieved from studies focusing on transcriptome profiling of expression of genes responsive to thyroid hormone action. 
This could help titrate thyroid hormone replacement therapy. New preparations of sustained-release T3 have successfully passed phase 1 clinical trials and may add to our armamentarium for HT therapy once the necessary efficacy trials are completed. USER: What does the context suggest are potential promising areas of research going forward? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 22 | len_user: 13 | len_context: 1,325 | target: null | row_id: 224
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
Our social security numbers were stolen in a cyber-hack and I'm freaking out. We have accounts for the whole family, including our kids. We are pretty broke, but I'm worried they will open credit cards or something. What are some ways I can protect our credit because I can't decide between a credit freeze, lock or fraud alerts, what are the pros and cons for all three? I normally do all our banking online and have already changed all our passwords just in case but need some reassurance we will be ok.
A credit freeze is a free service, guaranteed under federal law, that can protect you from credit fraud by limiting most access to your credit report until you lift it, or "thaw" your report. When your credit report is frozen, any lender who asks to evaluate your report for purposes of issuing a loan or other credit is denied access to the report. A security freeze won't affect your credit scores, but it will prevent lenders evaluating credit applications from obtaining your credit scores. A security freeze prevents criminals from opening new credit accounts in your name, but it also blocks your legitimate credit applications. So, if you have a credit freeze in place, you'll need to thaw your credit reports before applying for a new loan, credit card or other consumer credit. Once your application is processed, you can reinstate the freeze. Alternatively, you can use a temporary thaw to lift the freeze for a set window of time, such as one day or one week, after which the freeze will be reinstated. Credit freezes must be activated and lifted separately at each of the national credit bureaus. Procedures differ somewhat with each bureau, but all three enable requesting and lifting of security freezes online, by phone and via postal mail. There is never a fee for applying or removing a credit freeze. The law also allows you to establish and freeze credit reports for your minor children, to avoid misuse of their personal information. To place a credit freeze, you must provide details and proof of your identity and address, per instructions provided by the respective credit bureau. These typically include: Full name Date of birth All addresses you've used in the past two years Social Security number One copy of a government-issued identification, such as a driver's license or state ID card A recent copy of a utility bill, bank or insurance statement or similar, as proof of address How to Freeze Your Credit at Each Credit Bureau Experian TransUnion Equifax Online 888-EXPERIAN Experian Security Freeze P.O. Box 9554 Allen, TX 75013 Online 800-916-8800 TransUnion P.O. Box 160 Woodlyn, PA 19094 Online 888-298-0045 Equifax Information Services LLC P.O. Box 105788 Atlanta, GA 30348-5788 You can remove a credit freeze using the same channels you use to set up a freeze. When lifting a credit freeze, you have the option of permanently unfreezing your credit, or lifting the freeze temporarily by indicating a length of time (one day or one week, for example) you want the freeze to be suspended. Policies vary by bureau, so make sure you understand what your options are before you begin the process. When you request a credit thaw by phone, your freeze will be lifted within one hour. If you use a credit bureau website or phone app to turn off a credit freeze, the process is virtually instantaneous. If you mail your request, the freeze will be lifted within three days of the credit bureau receiving your request. A credit lock lets you restrict and grant access to your credit reports essentially the same way a credit freeze does, but usually with extra features. Credit lock services provided by each credit bureau may differ in cost and functionality. For instance, in addition to enabling you to turn access to your Experian credit report on and off instantly, CreditLock from Experian also notifies you when anyone requests access to your locked credit report. This can help you spot unauthorized activity in your name. 
Experian CreditLock is available as part of a premium identity protection subscription for $24.99 per month, which also includes services such as: Monthly privacy scans and help getting information removed from covered people search sites Alerts to credit activity on your credit reports at all three national credit bureaus Quarterly FICO® Scores☉ based on your credit reports at all three national bureaus Daily FICO® Scores based on your Experian credit report Alerts when your personal data appears on the dark web Alerts to potential takeovers of your financial accounts Alerts when your Social Security number appears online Up to $1 million in identity theft insurance coverage Dedicated fraud resolution support Lost wallet assistance Equifax provides credit locks free to consumers through Lock & Alert, which also notifies users when anyone requests a credit check on a locked Equifax credit report. TransUnion includes credit file locking for both your TransUnion and Equifax credit files in its subscription service, which costs $29.95 per month. Each credit bureau requires entering your Social Security number and identity verification as part of its credit lock setup process. How to Lock Your Credit Enabling credit lock requires signing up for the service with the relevant credit bureau. Controls differ somewhat at each credit bureau, but each is a virtual on-off toggle that locks and unlocks the applicable credit report. The process for unlocking your credit report at each credit bureau is the reverse of the procedure for locking it: Using the credit bureau's app or website, locate the credit lock function and click the virtual switch to toggle from the locked setting to the unlocked If you're concerned your identity may have been stolen, or if you've discovered fraudulent activity on your credit reports, applying a security freeze or credit lock to your credit report will effectively prevent lenders from processing new credit applications filed in your name. Because they are free and guaranteed by law, credit freezes may be the better choice for consumers who don't want to pay for the service and don't want any extra features. When freezing or unfreezing your credit online, the process is essentially as quick and easy to use as a credit lock. A credit freeze is also the only option available if you want to restrict access to your minor children's credit information. Credit lock might be a better option if you're interested in the additional identity theft protection services included in the accompanying subscriptions. These might be particularly helpful in addressing the consequences of identity theft, and helping prevent additional fraud. When weighing the benefits of credit freeze and credit lock, it may also be worth considering a third option, the fraud alert. You have the right to place a fraud alert on your credit report, and doing so instructs lenders to verify your identity before processing a credit application in your name. This allows you to apply for loans and other forms of credit without having to remember to thaw or unlock your credit reports. What's more, placing a fraud alert with any one credit bureau automatically applies the alert to all three credit bureaus. You'll have to remove each separately, however. Learn more >> Fraud Alert vs. Credit Freeze: What's the Difference? The Bottom Line Credit freezes and credit locks offer valuable protection for your credit reports, as either a precaution or a response to criminal abuse of your credit. 
When managed online, both freezes and locks are quick and easy to use, and highly effective at blocking credit applications in your name—both fraudulent and legitimate ones. If you decide to apply a credit lock or security freeze, remember to disable it before you apply for a new loan or credit account.
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. Our social security numbers were stolen in a cyber-hack and I'm freaking out. We have accounts for the whole family, including our kids. We are pretty broke, but I'm worried they will open credit cards or something. What are some ways I can protect our credit because I can't decide between a credit freeze, lock or fraud alerts, what are the pros and cons for all three? I normally do all our banking online and have already changed all our passwords just in case but need some reassurance we will be ok. A credit freeze is a free service, guaranteed under federal law, that can protect you from credit fraud by limiting most access to your credit report until you lift it, or "thaw" your report. When your credit report is frozen, any lender who asks to evaluate your report for purposes of issuing a loan or other credit is denied access to the report. A security freeze won't affect your credit scores, but it will prevent lenders evaluating credit applications from obtaining your credit scores. A security freeze prevents criminals from opening new credit accounts in your name, but it also blocks your legitimate credit applications. So, if you have a credit freeze in place, you'll need to thaw your credit reports before applying for a new loan, credit card or other consumer credit. Once your application is processed, you can reinstate the freeze. Alternatively, you can use a temporary thaw to lift the freeze for a set window of time, such as one day or one week, after which the freeze will be reinstated. Credit freezes must be activated and lifted separately at each of the national credit bureaus. Procedures differ somewhat with each bureau, but all three enable requesting and lifting of security freezes online, by phone and via postal mail. There is never a fee for applying or removing a credit freeze. The law also allows you to establish and freeze credit reports for your minor children, to avoid misuse of their personal information. To place a credit freeze, you must provide details and proof of your identity and address, per instructions provided by the respective credit bureau. These typically include: Full name Date of birth All addresses you've used in the past two years Social Security number One copy of a government-issued identification, such as a driver's license or state ID card A recent copy of a utility bill, bank or insurance statement or similar, as proof of address How to Freeze Your Credit at Each Credit Bureau Experian TransUnion Equifax Online 888-EXPERIAN Experian Security Freeze P.O. Box 9554 Allen, TX 75013 Online 800-916-8800 TransUnion P.O. Box 160 Woodlyn, PA 19094 Online 888-298-0045 Equifax Information Services LLC P.O. Box 105788 Atlanta, GA 30348-5788 You can remove a credit freeze using the same channels you use to set up a freeze. When lifting a credit freeze, you have the option of permanently unfreezing your credit, or lifting the freeze temporarily by indicating a length of time (one day or one week, for example) you want the freeze to be suspended. Policies vary by bureau, so make sure you understand what your options are before you begin the process. When you request a credit thaw by phone, your freeze will be lifted within one hour. If you use a credit bureau website or phone app to turn off a credit freeze, the process is virtually instantaneous. 
If you mail your request, the freeze will be lifted within three days of the credit bureau receiving your request. A credit lock lets you restrict and grant access to your credit reports essentially the same way a credit freeze does, but usually with extra features. Credit lock services provided by each credit bureau may differ in cost and functionality. For instance, in addition to enabling you to turn access to your Experian credit report on and off instantly, CreditLock from Experian also notifies you when anyone requests access to your locked credit report. This can help you spot unauthorized activity in your name. Experian CreditLock is available as part of a premium identity protection subscription for $24.99 per month, which also includes services such as: Monthly privacy scans and help getting information removed from covered people search sites Alerts to credit activity on your credit reports at all three national credit bureaus Quarterly FICO® Scores☉ based on your credit reports at all three national bureaus Daily FICO® Scores based on your Experian credit report Alerts when your personal data appears on the dark web Alerts to potential takeovers of your financial accounts Alerts when your Social Security number appears online Up to $1 million in identity theft insurance coverage Dedicated fraud resolution support Lost wallet assistance Equifax provides credit locks free to consumers through Lock & Alert, which also notifies users when anyone requests a credit check on a locked Equifax credit report. TransUnion includes credit file locking for both your TransUnion and Equifax credit files in its subscription service, which costs $29.95 per month. Each credit bureau requires entering your Social Security number and identity verification as part of its credit lock setup process. How to Lock Your Credit Enabling credit lock requires signing up for the service with the relevant credit bureau. Controls differ somewhat at each credit bureau, but each is a virtual on-off toggle that locks and unlocks the applicable credit report. The process for unlocking your credit report at each credit bureau is the reverse of the procedure for locking it: Using the credit bureau's app or website, locate the credit lock function and click the virtual switch to toggle from the locked setting to the unlocked If you're concerned your identity may have been stolen, or if you've discovered fraudulent activity on your credit reports, applying a security freeze or credit lock to your credit report will effectively prevent lenders from processing new credit applications filed in your name. Because they are free and guaranteed by law, credit freezes may be the better choice for consumers who don't want to pay for the service and don't want any extra features. When freezing or unfreezing your credit online, the process is essentially as quick and easy to use as a credit lock. A credit freeze is also the only option available if you want to restrict access to your minor children's credit information. Credit lock might be a better option if you're interested in the additional identity theft protection services included in the accompanying subscriptions. These might be particularly helpful in addressing the consequences of identity theft, and helping prevent additional fraud. When weighing the benefits of credit freeze and credit lock, it may also be worth considering a third option, the fraud alert. 
You have the right to place a fraud alert on your credit report, and doing so instructs lenders to verify your identity before processing a credit application in your name. This allows you to apply for loans and other forms of credit without having to remember to thaw or unlock your credit reports. What's more, placing a fraud alert with any one credit bureau automatically applies the alert to all three credit bureaus. You'll have to remove each separately, however. Learn more >> Fraud Alert vs. Credit Freeze: What's the Difference? The Bottom Line Credit freezes and credit locks offer valuable protection for your credit reports, as either a precaution or a response to criminal abuse of your credit. When managed online, both freezes and locks are quick and easy to use, and highly effective at blocking credit applications in your name—both fraudulent and legitimate ones. If you decide to apply a credit lock or security freeze, remember to disable it before you apply for a new loan or credit account. https://www.experian.com/blogs/ask-experian/whats-the-difference-between-credit-freeze-and-a-credit-lock/
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: A credit freeze is a free service, guaranteed under federal law, that can protect you from credit fraud by limiting most access to your credit report until you lift it, or "thaw" your report. When your credit report is frozen, any lender who asks to evaluate your report for purposes of issuing a loan or other credit is denied access to the report. A security freeze won't affect your credit scores, but it will prevent lenders evaluating credit applications from obtaining your credit scores. A security freeze prevents criminals from opening new credit accounts in your name, but it also blocks your legitimate credit applications. So, if you have a credit freeze in place, you'll need to thaw your credit reports before applying for a new loan, credit card or other consumer credit. Once your application is processed, you can reinstate the freeze. Alternatively, you can use a temporary thaw to lift the freeze for a set window of time, such as one day or one week, after which the freeze will be reinstated. Credit freezes must be activated and lifted separately at each of the national credit bureaus. Procedures differ somewhat with each bureau, but all three enable requesting and lifting of security freezes online, by phone and via postal mail. There is never a fee for applying or removing a credit freeze. The law also allows you to establish and freeze credit reports for your minor children, to avoid misuse of their personal information. To place a credit freeze, you must provide details and proof of your identity and address, per instructions provided by the respective credit bureau. These typically include: Full name Date of birth All addresses you've used in the past two years Social Security number One copy of a government-issued identification, such as a driver's license or state ID card A recent copy of a utility bill, bank or insurance statement or similar, as proof of address How to Freeze Your Credit at Each Credit Bureau Experian TransUnion Equifax Online 888-EXPERIAN Experian Security Freeze P.O. Box 9554 Allen, TX 75013 Online 800-916-8800 TransUnion P.O. Box 160 Woodlyn, PA 19094 Online 888-298-0045 Equifax Information Services LLC P.O. Box 105788 Atlanta, GA 30348-5788 You can remove a credit freeze using the same channels you use to set up a freeze. When lifting a credit freeze, you have the option of permanently unfreezing your credit, or lifting the freeze temporarily by indicating a length of time (one day or one week, for example) you want the freeze to be suspended. Policies vary by bureau, so make sure you understand what your options are before you begin the process. When you request a credit thaw by phone, your freeze will be lifted within one hour. If you use a credit bureau website or phone app to turn off a credit freeze, the process is virtually instantaneous. If you mail your request, the freeze will be lifted within three days of the credit bureau receiving your request. A credit lock lets you restrict and grant access to your credit reports essentially the same way a credit freeze does, but usually with extra features. Credit lock services provided by each credit bureau may differ in cost and functionality. 
For instance, in addition to enabling you to turn access to your Experian credit report on and off instantly, CreditLock from Experian also notifies you when anyone requests access to your locked credit report. This can help you spot unauthorized activity in your name. Experian CreditLock is available as part of a premium identity protection subscription for $24.99 per month, which also includes services such as: Monthly privacy scans and help getting information removed from covered people search sites Alerts to credit activity on your credit reports at all three national credit bureaus Quarterly FICO® Scores☉ based on your credit reports at all three national bureaus Daily FICO® Scores based on your Experian credit report Alerts when your personal data appears on the dark web Alerts to potential takeovers of your financial accounts Alerts when your Social Security number appears online Up to $1 million in identity theft insurance coverage Dedicated fraud resolution support Lost wallet assistance Equifax provides credit locks free to consumers through Lock & Alert, which also notifies users when anyone requests a credit check on a locked Equifax credit report. TransUnion includes credit file locking for both your TransUnion and Equifax credit files in its subscription service, which costs $29.95 per month. Each credit bureau requires entering your Social Security number and identity verification as part of its credit lock setup process. How to Lock Your Credit Enabling credit lock requires signing up for the service with the relevant credit bureau. Controls differ somewhat at each credit bureau, but each is a virtual on-off toggle that locks and unlocks the applicable credit report. The process for unlocking your credit report at each credit bureau is the reverse of the procedure for locking it: Using the credit bureau's app or website, locate the credit lock function and click the virtual switch to toggle from the locked setting to the unlocked If you're concerned your identity may have been stolen, or if you've discovered fraudulent activity on your credit reports, applying a security freeze or credit lock to your credit report will effectively prevent lenders from processing new credit applications filed in your name. Because they are free and guaranteed by law, credit freezes may be the better choice for consumers who don't want to pay for the service and don't want any extra features. When freezing or unfreezing your credit online, the process is essentially as quick and easy to use as a credit lock. A credit freeze is also the only option available if you want to restrict access to your minor children's credit information. Credit lock might be a better option if you're interested in the additional identity theft protection services included in the accompanying subscriptions. These might be particularly helpful in addressing the consequences of identity theft, and helping prevent additional fraud. When weighing the benefits of credit freeze and credit lock, it may also be worth considering a third option, the fraud alert. You have the right to place a fraud alert on your credit report, and doing so instructs lenders to verify your identity before processing a credit application in your name. This allows you to apply for loans and other forms of credit without having to remember to thaw or unlock your credit reports. What's more, placing a fraud alert with any one credit bureau automatically applies the alert to all three credit bureaus. You'll have to remove each separately, however. 
Learn more >> Fraud Alert vs. Credit Freeze: What's the Difference? The Bottom Line Credit freezes and credit locks offer valuable protection for your credit reports, as either a precaution or a response to criminal abuse of your credit. When managed online, both freezes and locks are quick and easy to use, and highly effective at blocking credit applications in your name—both fraudulent and legitimate ones. If you decide to apply a credit lock or security freeze, remember to disable it before you apply for a new loan or credit account. USER: Our social security numbers were stolen in a cyber-hack and I'm freaking out. We have accounts for the whole family, including our kids. We are pretty broke, but I'm worried they will open credit cards or something. What are some ways I can protect our credit because I can't decide between a credit freeze, lock or fraud alerts, what are the pros and cons for all three? I normally do all our banking online and have already changed all our passwords just in case but need some reassurance we will be ok. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 24 | len_user: 92 | len_context: 1,198 | target: null | row_id: 147
Please only use the provided information to answer the question. Do not use any external knowledge or prior knowledge. The answer should be extracted based on the text only.
Based on the provided text, can you provide the steps taken when leave is needed for FMLA?
Employees must provide a 30-day advance notice to employers when the need for leave is foreseeable based on an expected birth or a scheduled medical treatment.52 When the need for leave is not foreseeable (e.g., hospitalization resulting from an automobile accident) or when leave is needed to address a qualifying military exigency, notice must be given “as soon as practicable.”53 In some cases, an employer may delay approval of FMLA leave when advance notice requirements are not met. Compliance with Employers’ Policy for Requesting Leave In general, employers may condition FMLA leave approval upon an employee’s adherence to the employer’s policy for requesting leave.54 For example, if established in employer policy, an employer may require written request for leave, or require the employee to call-in prior to an absence when using intermittent leave. There are limits, however, on when employer policy can be used to deny or delay FMLA leave. Employers may not apply a longer notice period than the 30-day notice provided in the act (e.g., the employer cannot require a 45-day notice). An FMLA leave request that does not meet employer policy may not be denied if unusual circumstances prevent the employee from following employer policy (e.g., emergency medical treatment is required). Scheduling Planned Medical Treatment and Leave When the need for FMLA leave is based on a planned medical treatment and is foreseeable, the employee must make a reasonable effort to schedule the treatment so as not to disrupt unduly the business operations.55 Plans made between the employer and employee regarding scheduling of leave and the timing of planned medical treatment are subject to the approval of the employee’s health care provider. Employer Rights to Require Certification In some instances, employers may require that an employee’s request for FMLA leave be supported by medical certification (e.g., that a serious health condition exists) or other certification (e.g., to determine active duty status of a military member).56 Employers must notify employees each time certification is required, and inform employees of the anticipated consequences should the employee fail to provide certification (e.g., denial of leave). Medical Certification of a Serious Health Condition An employer may require an employee requesting leave for a serious health condition—his or her own, or that of a family member—to provide medical certification verifying that such a condition exists, and related information.57 A new certification of a serious health condition can be required every 12 months.
System Instruction: Please only use the provided information to answer the question. Do not use any external knowledge or prior knowledge. The answer should be extracted based on the text only. Question: Based on the provided text, can you provide the steps taken when leave is needed for FMLA? Context block: Employees must provide a 30-day advance notice to employers when the need for leave is foreseeable based on an expected birth or a scheduled medical treatment.52 When the need for leave is not foreseeable (e.g., hospitalization resulting from an automobile accident) or when leave is needed to address a qualifying military exigency, notice must be given “as soon as practicable.”53 In some cases, an employer may delay approval of FMLA leave when advance notice requirements are not met. Compliance with Employers’ Policy for Requesting Leave In general, employers may condition FMLA leave approval upon an employee’s adherence to the employer’s policy for requesting leave.54 For example, if established in employer policy, an employer may require written request for leave, or require the employee to call-in prior to an absence when using intermittent leave. There are limits, however, on when employer policy can be used to deny or delay FMLA leave. Employers may not apply a longer notice period than the 30-day notice provided in the act (e.g., the employer cannot require a 45-day notice). An FMLA leave request that does not meet employer policy may not be denied if unusual circumstances prevent the employee from following employer policy (e.g., emergency medical treatment is required). Scheduling Planned Medical Treatment and Leave When the need for FMLA leave is based on a planned medical treatment and is foreseeable, the employee must make a reasonable effort to schedule the treatment so as not to disrupt unduly the business operations.55 Plans made between the employer and employee regarding scheduling of leave and the timing of planned medical treatment are subject to the approval of the employee’s health care provider. Employer Rights to Require Certification In some instances, employers may require that an employee’s request for FMLA leave be supported by medical certification (e.g., that a serious health condition exists) or other certification (e.g., to determine active duty status of a military member).56 Employers must notify employees each time certification is required, and inform employees of the anticipated consequences should the employee fail to provide certification (e.g., denial of leave). Medical Certification of a Serious Health Condition An employer may require an employee requesting leave for a serious health condition—his or her own, or that of a family member—to provide medical certification verifying that such a condition exists, and related information.57 A new certification of a serious health condition can be required every 12 months.
Please only use the provided information to answer the question. Do not use any external knowledge or prior knowledge. The answer should be extracted based on the text only. EVIDENCE: Employees must provide a 30-day advance notice to employers when the need for leave is foreseeable based on an expected birth or a scheduled medical treatment.52 When the need for leave is not foreseeable (e.g., hospitalization resulting from an automobile accident) or when leave is needed to address a qualifying military exigency, notice must be given “as soon as practicable.”53 In some cases, an employer may delay approval of FMLA leave when advance notice requirements are not met. Compliance with Employers’ Policy for Requesting Leave In general, employers may condition FMLA leave approval upon an employee’s adherence to the employer’s policy for requesting leave.54 For example, if established in employer policy, an employer may require written request for leave, or require the employee to call-in prior to an absence when using intermittent leave. There are limits, however, on when employer policy can be used to deny or delay FMLA leave. Employers may not apply a longer notice period than the 30-day notice provided in the act (e.g., the employer cannot require a 45-day notice). An FMLA leave request that does not meet employer policy may not be denied if unusual circumstances prevent the employee from following employer policy (e.g., emergency medical treatment is required). Scheduling Planned Medical Treatment and Leave When the need for FMLA leave is based on a planned medical treatment and is foreseeable, the employee must make a reasonable effort to schedule the treatment so as not to disrupt unduly the business operations.55 Plans made between the employer and employee regarding scheduling of leave and the timing of planned medical treatment are subject to the approval of the employee’s health care provider. Employer Rights to Require Certification In some instances, employers may require that an employee’s request for FMLA leave be supported by medical certification (e.g., that a serious health condition exists) or other certification (e.g., to determine active duty status of a military member).56 Employers must notify employees each time certification is required, and inform employees of the anticipated consequences should the employee fail to provide certification (e.g., denial of leave). Medical Certification of a Serious Health Condition An employer may require an employee requesting leave for a serious health condition—his or her own, or that of a family member—to provide medical certification verifying that such a condition exists, and related information.57 A new certification of a serious health condition can be required every 12 months. USER: Based on the provided text, can you provide the steps taken when leave is needed for FMLA? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 29 | len_user: 17 | len_context: 400 | target: null | row_id: 164
Only answer by using the information in the context block below. Do not use external sources for your answer.
How does 10% of calories from fat benefit us?
The rationale for the Nutrition Spectrum Reversal Program guidelines can be stated briefly: 10% OF TOTAL CALORIES FROM FAT. The guideline of 10% of calories from fat provides sufficient nutrition, supports heart disease regression, and weight loss. It can be accomplished by eating a wide range of satisfying and pleasurable foods. Limiting dietary fat to 10% of total calories reduces consumption of all fats, which decreases blood cholesterol levels. It also typically reduces total calorie intake, because fat contains 9 calories per gram compared to 4 calories per gram in carbohydrates and protein. Reducing body weight reduces risk because obesity adds to the risk of heart disease. A nutrition program without added fats and high-fat foods (i.e. meat, fish, poultry, milk fat, oils, and high-fat plant foods) still contains about 10% of calories from fat. This comes from the naturally occurring fat in grain products and some vegetables and beans. Excessive food restrictions would be required for the nutrition program to go lower than 10% fat. The human body needs about 5% of calories from fat to obtain the essential fats for good health. Plus, there are no research studies that have evaluated or supported a fat intake below 10% fat. Diets with higher amounts of fat (20-30% fat) have not been associated with heart disease reversal. In addition, high-fat diets have been associated with an increased risk of some cancers, such as breast, colon, and prostate. All fats and oils contain three kinds of fat: saturated fat, monounsaturated fat, and polyunsaturated fat. These kinds of fats are present in different proportions in fats and oils, and they affect blood cholesterol levels differently. Typically, foods that are very high in saturated fat are solid at room temperature, and foods that are very low in saturated fat are liquid at room temperature.
Only answer by using the information in the context block below. Do not use external sources for your answer. How does 10% of calories from fat benefit us? [The rationale for the Nutrition Spectrum Reversal Program guidelines can be stated briefly: 10% OF TOTAL CALORIES FROM FAT. The guideline of 10% of calories from fat provides sufficient nutrition, supports heart disease regression, and weight loss. It can be accomplished by eating a wide range of satisfying and pleasurable foods. Limiting dietary fat to 10% of total calories reduces consumption of all fats, which decreases blood cholesterol levels. It also typically reduces total calorie intake, because fat contains 9 calories per gram compared to 4 calories per gram in carbohydrates and protein. Reducing body weight reduces risk because obesity adds to the risk of heart disease. A nutrition program without added fats and high-fat foods (i.e. meat, fish, poultry, milk fat, oils, and high-fat plant foods) still contains about 10% of calories from fat. This comes from the naturally occurring fat in grain products and some vegetables and beans. Excessive food restrictions would be required for the nutrition program to go lower than 10% fat. The human body needs about 5% of calories from fat to obtain the essential fats for good health. Plus, there are no research studies that have evaluated or supported a fat intake below 10% fat. Diets with higher amounts of fat (20-30% fat) have not been associated with heart disease reversal. In addition, high-fat diets have been associated with an increased risk of some cancers, such as breast, colon, and prostate. All fats and oils contain three kinds of fat: saturated fat, monounsaturated fat, and polyunsaturated fat. These kinds of fats are present in different proportions in fats and oils, and they affect blood cholesterol levels differently. Typically, foods that are very high in saturated fat are solid at room temperature, and foods that are very low in saturated fat are liquid at room temperature.]
Only answer by using the information in the context block below. Do not use external sources for your answer. EVIDENCE: The rationale for the Nutrition Spectrum Reversal Program guidelines can be stated briefly: 10% OF TOTAL CALORIES FROM FAT. The guideline of 10% of calories from fat provides sufficient nutrition, supports heart disease regression, and weight loss. It can be accomplished by eating a wide range of satisfying and pleasurable foods. Limiting dietary fat to 10% of total calories reduces consumption of all fats, which decreases blood cholesterol levels. It also typically reduces total calorie intake, because fat contains 9 calories per gram compared to 4 calories per gram in carbohydrates and protein. Reducing body weight reduces risk because obesity adds to the risk of heart disease. A nutrition program without added fats and high-fat foods (i.e. meat, fish, poultry, milk fat, oils, and high-fat plant foods) still contains about 10% of calories from fat. This comes from the naturally occurring fat in grain products and some vegetables and beans. Excessive food restrictions would be required for the nutrition program to go lower than 10% fat. The human body needs about 5% of calories from fat to obtain the essential fats for good health. Plus, there are no research studies that have evaluated or supported a fat intake below 10% fat. Diets with higher amounts of fat (20-30% fat) have not been associated with heart disease reversal. In addition, high-fat diets have been associated with an increased risk of some cancers, such as breast, colon, and prostate. All fats and oils contain three kinds of fat: saturated fat, monounsaturated fat, and polyunsaturated fat. These kinds of fats are present in different proportions in fats and oils, and they affect blood cholesterol levels differently. Typically, foods that are very high in saturated fat are solid at room temperature, and foods that are very low in saturated fat are liquid at room temperature. USER: How does 10% of calories from fat benefit us? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false | len_system: 19 | len_user: 9 | len_context: 301 | target: null | row_id: 704
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
Should internet providers be protected from all liability for information posted on their websites by third parties? Describe the pros and cons of keeping such protection in place, in a bullet list, and give a final judgment of which side is more persuasive.
DEPARTMENT OF JUSTICE’S REVIEW OF SECTION 230 OF THE COMMUNICATIONS DECENCY ACT OF 1996 Office of the Attorney General As part of the President's Executive Order on Preventing Online Censorship, and as a result of the Department's long standing review of Section 230, the Department has put together the following legislative package to reform Section 230. The proposal focuses on the two big areas of concern that were highlighted by victims, businesses, and other stakeholders in the conversations and meetings the Department held to discuss the issue. First, it addresses unclear and inconsistent moderation practices that limit speech and go beyond the text of the existing statute. Second, it addresses the proliferation of illicit and harmful content online that leaves victims without any civil recourse. Taken together, the Department's legislative package provides a clear path forward on modernizing Section 230 to encourage a safer and more open internet. Cover Letter: A letter to Congress explaining the need for Section 230 reform and how the Department proposes to reform it. Redline: A copy of the existing law with the Department's proposed changes in redline. Section by Section: An accompanying document to the redline that provides a detailed description and purpose for each edit to the existing statute. Read More As part of its broader review of market-leading online platforms, the U.S. Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability based on third-party content and for the removal of content in certain circumstances. Congress originally enacted the statute to nurture a nascent industry while also incentivizing online platforms to remove content harmful to children. The combination of significant technological changes since 1996 and the expansive interpretation that courts have given Section 230, however, has left online platforms both immune for a wide array of illicit activity on their services and free to moderate content with little transparency or accountability. The Department of Justice has concluded that the time is ripe to realign the scope of Section 230 with the realities of the modern internet. Reform is important now more than ever. Every year, more citizens—including young children—are relying on the internet for everyday activities, while online criminal activity continues to grow. We must ensure that the internet is both an open and safe space for our society. Based on engagement with experts, industry, thought-leaders, lawmakers, and the public, the Department has identified a set of concrete reform proposals to provide stronger incentives for online platforms to address illicit material on their services, while continuing to foster innovation and free speech. Read the Department’s Key Takeaways. The Department's review of Section 230 arose in the context of our broader review of market-leading online platforms and their practices, announced in July 2019. While competition has been a core part of the Department’s review, we also recognize that not all concerns raised about online platforms (including internet-based businesses and social media platforms) fall squarely within the U.S. antitrust laws. Our review has therefore looked broadly at other legal and policy frameworks applicable to online platforms. 
One key part of that legal landscape is Section 230, which provides immunity to online platforms from civil liability based on third-party content as well as immunity for removal of content in certain circumstances. Drafted in the early years of internet commerce, Section 230 was enacted in response to a problem that incipient online platforms were facing. In the years leading up to Section 230, courts had held that an online platform that passively hosted third-party content was not liable as a publisher if any of that content was defamatory, but that a platform would be liable as a publisher for all its third-party content if it exercised discretion to remove any third-party material. Platforms therefore faced a dilemma: They could try to moderate third-party content but risk being held liable for any and all content posted by third parties, or choose not to moderate content to avoid liability but risk having their services overrun with obscene or unlawful content. Congress enacted Section 230 in part to resolve this quandary by providing immunity to online platforms both for third-party content on their services or for removal of certain categories of content. The statute was meant to nurture emerging internet businesses while also incentivizing them to regulate harmful online content. The internet has changed dramatically in the 25 years since Section 230’s enactment in ways that no one, including the drafters of Section 230, could have predicted. Several online platforms have transformed into some of the nation’s largest and most valuable companies, and today’s online services bear little resemblance to the rudimentary offerings in 1996. Platforms no longer function as simple forums for posting third-party content, but instead use sophisticated algorithms to promote content and connect users. Platforms also now offer an ever-expanding array of services, playing an increasingly essential role in how Americans communicate, access media, engage in commerce, and generally carry on their everyday lives. These developments have brought enormous benefits to society. But they have also had downsides. Criminals and other wrongdoers are increasingly turning to online platforms to engage in a host of unlawful activities, including child sexual exploitation, selling illicit drugs, cyberstalking, human trafficking, and terrorism. At the same time, courts have interpreted the scope of Section 230 immunity very broadly, diverging from its original purpose. This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability. The time has therefore come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services. Much of the modern debate over Section 230 has been at opposite ends of the spectrum. Many have called for an outright repeal of the statute in light of the changed technological landscape and growing online harms. Others, meanwhile, have insisted that Section 230 be left alone and claimed that any reform will crumble the tech industry. 
Based on our analysis and external engagement, the Department believes there is productive middle ground and has identified a set of measured, yet concrete proposals that address many of the concerns raised about Section 230. A reassessment of America’s laws governing the internet could not be timelier. Citizens are relying on the internet more than ever for commerce, entertainment, education, employment, and public discourse. School closings in light of the COVID-19 pandemic mean that children are spending more time online, at times unsupervised, while more and more criminal activity is moving online. All of these factors make it imperative that we maintain the internet as an open and safe space.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== Should internet providers be protected from all liability for information posted on their websites by third parties? Describe the pros and cons of keeping such protection in place, in a bullet list, and give a final judgment of which side is more persuasive. {passage 0} ========== DEPARTMENT OF JUSTICE’S REVIEW OF SECTION 230 OF THE COMMUNICATIONS DECENCY ACT OF 1996 Office of the Attorney General As part of the President's Executive Order on Preventing Online Censorship, and as a result of the Department's long standing review of Section 230, the Department has put together the following legislative package to reform Section 230. The proposal focuses on the two big areas of concern that were highlighted by victims, businesses, and other stakeholders in the conversations and meetings the Department held to discuss the issue. First, it addresses unclear and inconsistent moderation practices that limit speech and go beyond the text of the existing statute. Second, it addresses the proliferation of illicit and harmful content online that leaves victims without any civil recourse. Taken together, the Department's legislative package provides a clear path forward on modernizing Section 230 to encourage a safer and more open internet. Cover Letter: A letter to Congress explaining the need for Section 230 reform and how the Department proposes to reform it. Redline: A copy of the existing law with the Department's proposed changes in redline. Section by Section: An accompanying document to the redline that provides a detailed description and purpose for each edit to the existing statute. Read More As part of its broader review of market-leading online platforms, the U.S. Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability based on third-party content and for the removal of content in certain circumstances. Congress originally enacted the statute to nurture a nascent industry while also incentivizing online platforms to remove content harmful to children. The combination of significant technological changes since 1996 and the expansive interpretation that courts have given Section 230, however, has left online platforms both immune for a wide array of illicit activity on their services and free to moderate content with little transparency or accountability. The Department of Justice has concluded that the time is ripe to realign the scope of Section 230 with the realities of the modern internet. Reform is important now more than ever. Every year, more citizens—including young children—are relying on the internet for everyday activities, while online criminal activity continues to grow. We must ensure that the internet is both an open and safe space for our society. Based on engagement with experts, industry, thought-leaders, lawmakers, and the public, the Department has identified a set of concrete reform proposals to provide stronger incentives for online platforms to address illicit material on their services, while continuing to foster innovation and free speech. Read the Department’s Key Takeaways. The Department's review of Section 230 arose in the context of our broader review of market-leading online platforms and their practices, announced in July 2019. 
While competition has been a core part of the Department’s review, we also recognize that not all concerns raised about online platforms (including internet-based businesses and social media platforms) fall squarely within the U.S. antitrust laws. Our review has therefore looked broadly at other legal and policy frameworks applicable to online platforms. One key part of that legal landscape is Section 230, which provides immunity to online platforms from civil liability based on third-party content as well as immunity for removal of content in certain circumstances. Drafted in the early years of internet commerce, Section 230 was enacted in response to a problem that incipient online platforms were facing. In the years leading up to Section 230, courts had held that an online platform that passively hosted third-party content was not liable as a publisher if any of that content was defamatory, but that a platform would be liable as a publisher for all its third-party content if it exercised discretion to remove any third-party material. Platforms therefore faced a dilemma: They could try to moderate third-party content but risk being held liable for any and all content posted by third parties, or choose not to moderate content to avoid liability but risk having their services overrun with obscene or unlawful content. Congress enacted Section 230 in part to resolve this quandary by providing immunity to online platforms both for third-party content on their services or for removal of certain categories of content. The statute was meant to nurture emerging internet businesses while also incentivizing them to regulate harmful online content. The internet has changed dramatically in the 25 years since Section 230’s enactment in ways that no one, including the drafters of Section 230, could have predicted. Several online platforms have transformed into some of the nation’s largest and most valuable companies, and today’s online services bear little resemblance to the rudimentary offerings in 1996. Platforms no longer function as simple forums for posting third-party content, but instead use sophisticated algorithms to promote content and connect users. Platforms also now offer an ever-expanding array of services, playing an increasingly essential role in how Americans communicate, access media, engage in commerce, and generally carry on their everyday lives. These developments have brought enormous benefits to society. But they have also had downsides. Criminals and other wrongdoers are increasingly turning to online platforms to engage in a host of unlawful activities, including child sexual exploitation, selling illicit drugs, cyberstalking, human trafficking, and terrorism. At the same time, courts have interpreted the scope of Section 230 immunity very broadly, diverging from its original purpose. This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability. The time has therefore come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services. Much of the modern debate over Section 230 has been at opposite ends of the spectrum. 
Many have called for an outright repeal of the statute in light of the changed technological landscape and growing online harms. Others, meanwhile, have insisted that Section 230 be left alone and claimed that any reform will crumble the tech industry. Based on our analysis and external engagement, the Department believes there is productive middle ground and has identified a set of measured, yet concrete proposals that address many of the concerns raised about Section 230. A reassessment of America’s laws governing the internet could not be timelier. Citizens are relying on the internet more than ever for commerce, entertainment, education, employment, and public discourse. School closings in light of the COVID-19 pandemic mean that children are spending more time online, at times unsupervised, while more and more criminal activity is moving online. All of these factors make it imperative that we maintain the internet as an open and safe space. https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: DEPARTMENT OF JUSTICE’S REVIEW OF SECTION 230 OF THE COMMUNICATIONS DECENCY ACT OF 1996 Office of the Attorney General As part of the President's Executive Order on Preventing Online Censorship, and as a result of the Department's long standing review of Section 230, the Department has put together the following legislative package to reform Section 230. The proposal focuses on the two big areas of concern that were highlighted by victims, businesses, and other stakeholders in the conversations and meetings the Department held to discuss the issue. First, it addresses unclear and inconsistent moderation practices that limit speech and go beyond the text of the existing statute. Second, it addresses the proliferation of illicit and harmful content online that leaves victims without any civil recourse. Taken together, the Department's legislative package provides a clear path forward on modernizing Section 230 to encourage a safer and more open internet. Cover Letter: A letter to Congress explaining the need for Section 230 reform and how the Department proposes to reform it. Redline: A copy of the existing law with the Department's proposed changes in redline. Section by Section: An accompanying document to the redline that provides a detailed description and purpose for each edit to the existing statute. Read More As part of its broader review of market-leading online platforms, the U.S. Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability based on third-party content and for the removal of content in certain circumstances. Congress originally enacted the statute to nurture a nascent industry while also incentivizing online platforms to remove content harmful to children. The combination of significant technological changes since 1996 and the expansive interpretation that courts have given Section 230, however, has left online platforms both immune for a wide array of illicit activity on their services and free to moderate content with little transparency or accountability. The Department of Justice has concluded that the time is ripe to realign the scope of Section 230 with the realities of the modern internet. Reform is important now more than ever. Every year, more citizens—including young children—are relying on the internet for everyday activities, while online criminal activity continues to grow. We must ensure that the internet is both an open and safe space for our society. Based on engagement with experts, industry, thought-leaders, lawmakers, and the public, the Department has identified a set of concrete reform proposals to provide stronger incentives for online platforms to address illicit material on their services, while continuing to foster innovation and free speech. Read the Department’s Key Takeaways. The Department's review of Section 230 arose in the context of our broader review of market-leading online platforms and their practices, announced in July 2019. While competition has been a core part of the Department’s review, we also recognize that not all concerns raised about online platforms (including internet-based businesses and social media platforms) fall squarely within the U.S. antitrust laws. 
Our review has therefore looked broadly at other legal and policy frameworks applicable to online platforms. One key part of that legal landscape is Section 230, which provides immunity to online platforms from civil liability based on third-party content as well as immunity for removal of content in certain circumstances. Drafted in the early years of internet commerce, Section 230 was enacted in response to a problem that incipient online platforms were facing. In the years leading up to Section 230, courts had held that an online platform that passively hosted third-party content was not liable as a publisher if any of that content was defamatory, but that a platform would be liable as a publisher for all its third-party content if it exercised discretion to remove any third-party material. Platforms therefore faced a dilemma: They could try to moderate third-party content but risk being held liable for any and all content posted by third parties, or choose not to moderate content to avoid liability but risk having their services overrun with obscene or unlawful content. Congress enacted Section 230 in part to resolve this quandary by providing immunity to online platforms both for third-party content on their services or for removal of certain categories of content. The statute was meant to nurture emerging internet businesses while also incentivizing them to regulate harmful online content. The internet has changed dramatically in the 25 years since Section 230’s enactment in ways that no one, including the drafters of Section 230, could have predicted. Several online platforms have transformed into some of the nation’s largest and most valuable companies, and today’s online services bear little resemblance to the rudimentary offerings in 1996. Platforms no longer function as simple forums for posting third-party content, but instead use sophisticated algorithms to promote content and connect users. Platforms also now offer an ever-expanding array of services, playing an increasingly essential role in how Americans communicate, access media, engage in commerce, and generally carry on their everyday lives. These developments have brought enormous benefits to society. But they have also had downsides. Criminals and other wrongdoers are increasingly turning to online platforms to engage in a host of unlawful activities, including child sexual exploitation, selling illicit drugs, cyberstalking, human trafficking, and terrorism. At the same time, courts have interpreted the scope of Section 230 immunity very broadly, diverging from its original purpose. This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability. The time has therefore come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services. Much of the modern debate over Section 230 has been at opposite ends of the spectrum. Many have called for an outright repeal of the statute in light of the changed technological landscape and growing online harms. Others, meanwhile, have insisted that Section 230 be left alone and claimed that any reform will crumble the tech industry. 
Based on our analysis and external engagement, the Department believes there is productive middle ground and has identified a set of measured, yet concrete proposals that address many of the concerns raised about Section 230. A reassessment of America’s laws governing the internet could not be timelier. Citizens are relying on the internet more than ever for commerce, entertainment, education, employment, and public discourse. School closings in light of the COVID-19 pandemic mean that children are spending more time online, at times unsupervised, while more and more criminal activity is moving online. All of these factors make it imperative that we maintain the internet as an open and safe space. USER: Should internet providers be protected from all liability for information posted on their websites by third parties? Describe the pros and cons of keeping such protection in place, in a bullet list, and give a final judgment of which side is more persuasive. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
43
1,143
null
544
Respond using only the provided text without any external information.
Explain how retinol helps skin in a month.
1. Retinol alleviates aging-related conditions of human skin. The skin phenotypes of each individual enrolled were collected at five time points. Principal coordinates analysis (PCoA) revealed that participants experienced an overall change in skin phenotype after application of retinol. The Adonis test demonstrated significant separation from Day 0 for Day 21 (adj.p=0.010) and Day 28 (adj.p=0.017), indicating an altered phenotype compared to the baseline (Fig.S1A). Moreover, more than half (eight out of fourteen) of the phenotypic measurements, including water content in the stratum corneum (WCSC), transepidermal water loss (TEWL), pH value, percentage of the red area, and various wrinkle parameters (number, length, area, and volume), exhibited significant changes compared to the baseline (Fig.1). The topical use of retinol resulted in enhanced water retention capability of the skin, with a 31.4% increase in WCSC (adj.p=0.050, Wilcoxon paired test) and an 18.4% decrease in TEWL at day 28 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), respectively. Notably, WCSC on day 28 showed a significant increase not only compared to the baseline but also compared to days 7 and 14 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), suggesting that the stratum corneum's water retention ability improves with prolonged retinol use. The decrease in TEWL indicated that retinol markedly improved and repaired the skin barrier, helping to retain skin moisture and prevent water loss. Retinol also demonstrated its effectiveness in sedative and anti-inflammatory skincare properties, as evidenced by a significant reduction in the size of red areas: a decrease of 11.6% on day 21 and 13.2% on day 28 compared to the baseline level (Table S2). The pH values of the skin cheek surface exhibited a decline from 7.21 (average value on Day 0) to 6.72 (average value on Day 28) while using retinol, indicating the gradual formation of a weakly acidic environment on the facial skin and suggesting that retinol can modulate and maintain the acid-base balance of the skin. Meanwhile, multiple wrinkle-related indicators, including the wrinkle number, length, area, and volume at the corners of the eyes, displayed significant reductions starting from Day 7 compared to the pre-retinol use conditions (Table S2). Specifically, the wrinkle number decreased significantly on Days 7, 21 and 28, with the most substantial reduction observed on Days 21 and 28, resulting in a 27.8% decrease relative to baseline. The wrinkle length significantly decreased at all four sampling time points compared to baseline. By Day 28, the average wrinkle length decreased from 68.7μm to 42μm, representing a reduction of 38.8% compared to baseline. Wrinkle area exhibited significant reductions on Days 7 and 21, with a notable decrease of 34.8% observed on Day 21. Finally, the wrinkle volume exhibited a significant decrease of 26.9% at Day 7. These findings highlight the potent anti-aging properties of retinol and its efficacy in improving facial wrinkle conditions.
Figure 1: The temporal variation of skin phenotypic traits. Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test and adjusted using the Benjamini & Hochberg (BH) method. Displayed results are adjusted p values. Unmarked indicates non-significance, * indicates adj.p ≤ 0.1, ** indicates adj.p ≤ 0.05. Data are shown in Table S2.
2. Retinol reshapes human skin microbiome microecology. The application of retinol had a dramatic impact on the restructuring of skin microbiome microecology (Fig.S1C). Species-level alpha diversity (Shannon diversity index and species evenness index) was significantly lower on day 7 compared to day 0 (Fig.2B, p=0.031, paired Wilcoxon test). This decrease in diversity could be attributed to an imbalance in the relative distribution of certain species within the microbial community. Specifically, there was a significant decrease in the relative abundance of Corynebacterium accolens, a skin bacterium ranked among the top 20 abundant species, on day 7 compared to the baseline (Fig. S1B). This reduction may have allowed other species to occupy a relatively larger ecological niche, resulting in a decline in microbial diversity. Notably, opportunistic pathogens such as Stenotrophomonas maltophilia, Acinetobacter johnsonii, Pseudomonas sp., and Sphingomonas hankookensis showed significant decreases in their relative abundances at three consecutive time points compared to the baseline (Fig.2C). This suggests that the retinol-containing skincare product possesses antimicrobial properties and can reduce the colonization of pathogenic bacteria on the skin surface. We also noticed an increase in the relative abundance of Neisseriales species incertae sedis (Fig.S1B) and Corynebacterium jeddahense (Fig.2C). However, their specific functions remain unclear.
Figure 2: Effects of retinol on the structure of skin microbiome. (A) Gene pathway enrichment analysis based on ReporterScore. A ReporterScore with an absolute value greater than 1.64 indicates significant enrichment of the gene pathway, with positive or negative signs denoting upregulation or downregulation compared to the control group (Day 0, baseline). (B) Skin microbiome alpha diversity indexes. (C) Intergroup differential species abundance changes. Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test. Displayed results are p values. Unmarked indicates non-significance, * indicates p ≤ 0.05, ** indicates p ≤ 0.01.
Retinol also exerts an impact on the functionality of the skin microbiome. Starting from Day 14, several microbial gene pathways displayed altered regulation levels (Fig. 2A). Notably, the thiamine (Vitamin B1) metabolism gene pathway was enriched on both Day 14 and Day 21, marked by a peak in the abundance of a thiamine metabolite, biotin thiamine, on Day 14, which significantly increased compared to Day 0 (Fig. S2A). Metabolomic data also showed that the microbial and host thiamine metabolic pathway was significantly (p=0.026) enriched on Day 14, marked by increased intensities of pyruvic acid and L-Tyrosine (Fig. S2A). All three substances are biosynthetic precursors of thiamin. Thiamin has a variety of benefits for the skin, including increasing the expression of collagen, promoting skin cell growth and repair, and maintaining skin elasticity. Our findings suggest that retinol helps to promote the synthesis and utilization of thiamine by the skin microbiome, thereby enhancing the function and health of the skin barrier. Furthermore, the riboflavin metabolism pathway showed enrichment on Day 14 (Fig. 2A), accompanied by a notable decrease in the abundance of riboflavin on Day 14, while the intensity of riboflavin 5'-phosphate sodium, a bio-active form of riboflavin, exhibited an increase relative to baseline on Day 21 and Day 28 (Fig. S2A). The riboflavin phosphate sodium salt form is an essential micronutrient and plays an important role in the health of the skin, mucous membranes, and eyes. Retinol makes microorganisms more inclined to utilize or convert riboflavin into the active form, riboflavin 5'-phosphate sodium. Additionally, certain gene pathways demonstrated decreased expression levels: biofilm formation of Vibrio cholerae and flagellar assembly were down-regulated on Day 21, and the bacterial secretion system and O-antigen nucleotide sugar biosynthesis were both consistently down-regulated during the last three time points (Fig. 2A). This down-regulation implies that retinol may possess antimicrobial and anti-inflammatory capabilities by inhibiting bacterial metabolic activity, secretion of bacterial products, and influencing the integrity of bacterial structure.
3. Retinol stimulates skin microbiota's secretion of diverse beneficial metabolites for synergistic anti-aging effects. We further utilized MetOrigin to perform tracing analysis of metabolites (based on the MetOrigin databases, to determine whether they originated from the host, microorganisms, or co-metabolism) and metabolic pathway enrichment analysis. Of note, the nicotinate and nicotinamide metabolism pathway (hsa00760) was enriched in the host on Day 21, supported by an increase in N1-methyl-4-pyridone-3-carboxamide, which is associated with this pathway. In the microbe, degradation of flavonoids (ko00946), phenylalanine, tyrosine and tryptophan biosynthesis (ko00400), and biosynthesis of various plant secondary metabolites (ko00999) were up-regulated (Fig. 3A). Based on the databases of MetOrigin, we found a variety of microbial-origin metabolites significantly related to the enrichment of the above pathways, some of which have been reported or experimentally verified to be beneficial to the skin. Maesopsin and apigenin were related to ko00946, where apigenin was known as an anti-tumor substance that is particularly helpful in preventing and reversing the formation of abnormal skin [49–52]. Quinic acid, 3-dehydroquinic acid and protocatechuic acid were related to ko00400, where quinic acid was reported to have an antiphotoaging effect by protecting human dermal fibroblasts [53,54] and protocatechuic acid was demonstrated to have antioxidant and anti-aging effects by inducing dermal fibroblasts to synthesize type-1 collagen [55–57]. In ko00999, (+)-pinoresinol and secoisolariciresinol were significantly related; the former was reported to stimulate keratinocyte proliferation [58,59] and the latter was reported to suppress atopic dermatitis in the mouse when administered orally [60].
(Source: bioRxiv preprint, doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024.)
Explain how retinol helps skin in a month. Respond using only the provided text without any external information. 1. Retinol alleviates aging related conditions of human skin The skin phenotypes of each individual enrolled were collected at five time points. Principal coordinates analysis (PCoA) revealed that participants experienced an overall change in skin phenotype after application of retinol. The Adonis test demonstrated significant separation from Day 0 for Day 21 (adj.p=0.010) and Day 28 (adj.p=0.017), indicating an altered phenotype compared to the baseline (Fig.S1A). Moreover, more than half (eight out of fourteen) of the phenotypic measurements, including water content in the stratum corneum (WCSC), transepidermal water loss (TEWL), pH value, percentage of the red area, and various wrinkle parameters (number, length, area, and volume), exhibited significant changes compared to the baseline (Fig.1). The topical use of the retinol resulted in enhanced water retention capability of the skin, with a 31.4% increase in WCSC (adj.p=0.050, Wilcoxon paired test) and an 18.4% decrease in TEWL at day 28 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), respectively. Notably, WCSC on day 28 showed a significant increase not only compared to the baseline but also compared to days 7 and 14 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), suggesting that the stratum corneum's water retention ability improves with prolonged retinol use. The decrease in TWEL indicated that retinol markedly improved and repaired the skin barrier, helping to retain skin moisture and prevent water loss. Retinol also demonstrated its effectiveness in sedative and anti-inflammatory skincare properties, as evidenced by a significant reduction in the size of red areas: a decrease of 11.6% on day 21 and 13.2% on day 28 compared to the baseline level (Table S2). The pH values of the skin cheek surface exhibited a decline from 7.21 (average value on Day 0) to 6.72 (average value on Day 28) while using retinol, indicating the gradual formation of a weakly acidic environment on the facial skin, suggesting that retinol can modulate and maintain acid-base balance of skin. Meanwhile, multiple wrinkle-related indicators, including the wrinkle number, length, area, and volume at the corners of the eyes, displayed significant reductions starting from Day 7 compared to the pre-retinol use conditions (Table S2). Specifically, the wrinkle number decreased significantly on Day 7, 21 and 28, with the most substantial reduction observed on Day 21 and 28, resulting in a 27.8% decrease relative to baseline. The wrinkle length significantly decreased at all four sampling time points compared to baseline. By Day 28, the average wrinkle length decreased from 68.7μm to 42μm, representing a reduction of 38.8% compared to baseline. Wrinkle area exhibited significant reductions on Day 7 and 21, with a notable decrease of 34.8% observed on Day 21. Finally, the wrinkle volume exhibited a significant decrease of 26.9% at Day 7. These findings highlight the potent anti-aging properties of retinol and its efficacy in improving facial wrinkle conditions. (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. 
The copyright holder for this preprint Figure 1 The temporal variation of skin phenotypic traits Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test and adjusted using the Benjamini & Hochberg (BH) method. Displayed results are adjusted p values. Unmarked indicates nonsignificance, * indicates adj.p ≤ 0.1, ** indicates adj.p ≤ 0.05. Data are shown in Table S2. 2. Retinol reshapes human skin microbiome microecology. The application of the retinol had a dramatic impact on the restructuring of skin microbiome microecology (Fig.S1C). Species-level alpha diversity (Shannon diversity index and species evenness index) was significantly lower on day 7 compared to day 0 (Fig.2B, p=0.031, paired Wilcoxon test). This decrease in diversity could be attributed to an imbalance in the relative distribution of certain species within the microbial community. Specifically, there was a significant decrease in the relative abundance of Corynebacterium accolens, a skin bacterium ranked among the top 20 abundant species, on day 7 compared to the baseline (Fig. S1B). This reduction may have allowed other species to occupy a relatively larger ecological niche, resulting in a decline in microbial diversity. Notably, opportunistic pathogens such as Stenotrophomonas maltophilia, Acinetobacter johnsonii, Pseudomonas sp., and Sphingomonas hankookensis showed significant decreases in their relative abundances at three consecutive time points compared to the baseline (Fig.2C). This suggests that the retinol-containing skincare product possesses antimicrobial properties and can reduce the colonization of pathogenic bacteria on the skin surface. We also noticed an increase in the relative abundance of Neisseriales species incertae sedis (Fig.S1B) and Corynebacterium jeddahense (Fig.2C). However, their specific functions remain unclear. (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. The copyright holder for this preprint Figure 2 Effects of retinol on the structure of skin microbiome. (A) Gene pathway enrichment analysis based on ReporterScore. A ReporterScore with an absolute value greater than 1.64 indicates significant enrichment of the gene pathway, with positive or negative signs denoting upregulation or downregulation compared to the control group (Day 0, baseline). (B) Skin microbiome Alpha diversity indexes. (C) Intergroup differential species abundance changes. Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test. Displayed results are p values. Unmarked indicates non-significance, * indicates p ≤ 0.05, ** indicates p≤ 0.01. Retinol also exerts an impact on the functionality of the skin microbiome. Starting from Day 14, several microbial gene pathways displayed altered regulation levels (Fig. 2A). Notably, the thiamine (Vitamin B1) metabolism gene pathway was enriched on both Day 14 and Day 21, marked by a peak in the abundance of a thiamine metabolite, biotin thiamine, on Day 14, which significantly increased compared to Day 0 (Fig. S2A). Metabolomic data also showed that the microbial and host thiamine metabolic pathway was significantly (p=0.026) enriched on Day 14, marked by increased intensities of pyruvic acid and L-Tyrosine (Fig. S2A). All the three (which was not certified by peer review) is the author/funder. 
All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. The copyright holder for this preprint substances are biosynthetic precursors of thiamin. Thiamin has a variety of benefits for the skin, including increasing the expression of collagen, promoting skin cell growth and repair, and maintaining skin elasticity. Our findings suggest that retinol helps to promote the synthesis and utilization of thiamine by the skin microbiome, thereby enhancing the function and health of the skin barrier. Furthermore, the riboflavin metabolism pathway showed enrichment on Day 14 (Fig. 2A), accompanied by a notable decrease in the abundance of riboflavin on Day 14, while the intensity of riboflavin 5’-phosphate sodium, a bio-active form of riboflavin, exhibited an increase relative to baseline on Day 21 and Day 28(Fig. S2A). Riboflavin phosphate sodium salt form is an essential micronutrient, and plays an important role in the health of the skin, mucous membranes, and eyes. Retinol makes microorganisms more inclined to utilize or convert riboflavin into the active form, riboflavin 5’-phosphate sodium. Additionally, certain gene pathways demonstrated decreased expression levels, including biofilm formation of Vibrio cholerae and flagellar assembly were down-regulated on Day 21. And the bacterial secretion system and O-antigen nucleotide sugar biosynthesis, both were consistently down-regulated during the last three time points(Fig. 2A). The down-regulation implies that retinol may possess antimicrobial and anti-inflammatory capabilities by inhibiting bacterial metabolic activity, secretion of bacterial products, and influencing the integrity of bacterial structure. 3. Retinol stimulates skin microbiota’s secretion of diverse beneficial metabolites for synergistic anti-aging effects. We further utilized MetOrigin to perform tracing analysis of metabolites (based on the databases of MetOrigin tracking to determine whether they originated from the host, microorganisms, or co-metabolism) and metabolic pathway enrichment analysis. Of note, nicotinate and nicotinamide metabolism pathway (hsa00760) enriched in the host on Day 21 , and supported by increment of N1-methyl-4-pyridone-3-carboxamide which is associated with this pathway In the microbe, degradation of flavonoids (ko00946), phenylalanine, tyrosine and tryptophan biosynthesis (ko00400), and biosynthesis of various plant secondary metabolites (ko00999) were up-regulated (Fig. 3A). Based on the databases of MetOrigin, we found a variety of microbial-origin metabolites significantly related to the enrichment of the above pathways, some of which have been reported or experimentally verified to be beneficial to the skin. Maesopsin and apigenin were related to ko00946, where apigenin was known as an anti-tumor substance that is particularly helpful in preventing and reversing the formation of abnormal skin49–52. Quinic acid, 3-dehydroquinic acid and protocatechuic acid were related to ko00400, where quinic acid was reported to have an antiphotoaging effect by protecting human dermal fibroblasts53,54 and protocatechuic acid was demonstrated to have anti-oxidate and anti-aging effects by inducing dermal fibroblasts to synthesis type-1 collagen55–57. 
In ko00999, (+)-pinoresinol and secoisolariciresinol were significantly related; the former was reported to stimulate keratinocyte proliferation [58,59] and the latter was reported to suppress atopic dermatitis in the mouse when administered orally [60].
Respond using only the provided text without any external information. EVIDENCE: 1. Retinol alleviates aging related conditions of human skin The skin phenotypes of each individual enrolled were collected at five time points. Principal coordinates analysis (PCoA) revealed that participants experienced an overall change in skin phenotype after application of retinol. The Adonis test demonstrated significant separation from Day 0 for Day 21 (adj.p=0.010) and Day 28 (adj.p=0.017), indicating an altered phenotype compared to the baseline (Fig.S1A). Moreover, more than half (eight out of fourteen) of the phenotypic measurements, including water content in the stratum corneum (WCSC), transepidermal water loss (TEWL), pH value, percentage of the red area, and various wrinkle parameters (number, length, area, and volume), exhibited significant changes compared to the baseline (Fig.1). The topical use of the retinol resulted in enhanced water retention capability of the skin, with a 31.4% increase in WCSC (adj.p=0.050, Wilcoxon paired test) and an 18.4% decrease in TEWL at day 28 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), respectively. Notably, WCSC on day 28 showed a significant increase not only compared to the baseline but also compared to days 7 and 14 (adj.p=0.004, adj.p=0.050, Wilcoxon paired test), suggesting that the stratum corneum's water retention ability improves with prolonged retinol use. The decrease in TWEL indicated that retinol markedly improved and repaired the skin barrier, helping to retain skin moisture and prevent water loss. Retinol also demonstrated its effectiveness in sedative and anti-inflammatory skincare properties, as evidenced by a significant reduction in the size of red areas: a decrease of 11.6% on day 21 and 13.2% on day 28 compared to the baseline level (Table S2). The pH values of the skin cheek surface exhibited a decline from 7.21 (average value on Day 0) to 6.72 (average value on Day 28) while using retinol, indicating the gradual formation of a weakly acidic environment on the facial skin, suggesting that retinol can modulate and maintain acid-base balance of skin. Meanwhile, multiple wrinkle-related indicators, including the wrinkle number, length, area, and volume at the corners of the eyes, displayed significant reductions starting from Day 7 compared to the pre-retinol use conditions (Table S2). Specifically, the wrinkle number decreased significantly on Day 7, 21 and 28, with the most substantial reduction observed on Day 21 and 28, resulting in a 27.8% decrease relative to baseline. The wrinkle length significantly decreased at all four sampling time points compared to baseline. By Day 28, the average wrinkle length decreased from 68.7μm to 42μm, representing a reduction of 38.8% compared to baseline. Wrinkle area exhibited significant reductions on Day 7 and 21, with a notable decrease of 34.8% observed on Day 21. Finally, the wrinkle volume exhibited a significant decrease of 26.9% at Day 7. These findings highlight the potent anti-aging properties of retinol and its efficacy in improving facial wrinkle conditions. (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. 
The copyright holder for this preprint Figure 1 The temporal variation of skin phenotypic traits Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test and adjusted using the Benjamini & Hochberg (BH) method. Displayed results are adjusted p values. Unmarked indicates nonsignificance, * indicates adj.p ≤ 0.1, ** indicates adj.p ≤ 0.05. Data are shown in Table S2. 2. Retinol reshapes human skin microbiome microecology. The application of the retinol had a dramatic impact on the restructuring of skin microbiome microecology (Fig.S1C). Species-level alpha diversity (Shannon diversity index and species evenness index) was significantly lower on day 7 compared to day 0 (Fig.2B, p=0.031, paired Wilcoxon test). This decrease in diversity could be attributed to an imbalance in the relative distribution of certain species within the microbial community. Specifically, there was a significant decrease in the relative abundance of Corynebacterium accolens, a skin bacterium ranked among the top 20 abundant species, on day 7 compared to the baseline (Fig. S1B). This reduction may have allowed other species to occupy a relatively larger ecological niche, resulting in a decline in microbial diversity. Notably, opportunistic pathogens such as Stenotrophomonas maltophilia, Acinetobacter johnsonii, Pseudomonas sp., and Sphingomonas hankookensis showed significant decreases in their relative abundances at three consecutive time points compared to the baseline (Fig.2C). This suggests that the retinol-containing skincare product possesses antimicrobial properties and can reduce the colonization of pathogenic bacteria on the skin surface. We also noticed an increase in the relative abundance of Neisseriales species incertae sedis (Fig.S1B) and Corynebacterium jeddahense (Fig.2C). However, their specific functions remain unclear. (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. The copyright holder for this preprint Figure 2 Effects of retinol on the structure of skin microbiome. (A) Gene pathway enrichment analysis based on ReporterScore. A ReporterScore with an absolute value greater than 1.64 indicates significant enrichment of the gene pathway, with positive or negative signs denoting upregulation or downregulation compared to the control group (Day 0, baseline). (B) Skin microbiome Alpha diversity indexes. (C) Intergroup differential species abundance changes. Statistical significance levels of data relative to baseline changes were assessed using the Wilcoxon paired test. Displayed results are p values. Unmarked indicates non-significance, * indicates p ≤ 0.05, ** indicates p≤ 0.01. Retinol also exerts an impact on the functionality of the skin microbiome. Starting from Day 14, several microbial gene pathways displayed altered regulation levels (Fig. 2A). Notably, the thiamine (Vitamin B1) metabolism gene pathway was enriched on both Day 14 and Day 21, marked by a peak in the abundance of a thiamine metabolite, biotin thiamine, on Day 14, which significantly increased compared to Day 0 (Fig. S2A). Metabolomic data also showed that the microbial and host thiamine metabolic pathway was significantly (p=0.026) enriched on Day 14, marked by increased intensities of pyruvic acid and L-Tyrosine (Fig. S2A). All the three (which was not certified by peer review) is the author/funder. 
All rights reserved. No reuse allowed without permission. bioRxiv preprint doi: https://doi.org/10.1101/2024.06.26.600860; this version posted June 27, 2024. The copyright holder for this preprint substances are biosynthetic precursors of thiamin. Thiamin has a variety of benefits for the skin, including increasing the expression of collagen, promoting skin cell growth and repair, and maintaining skin elasticity. Our findings suggest that retinol helps to promote the synthesis and utilization of thiamine by the skin microbiome, thereby enhancing the function and health of the skin barrier. Furthermore, the riboflavin metabolism pathway showed enrichment on Day 14 (Fig. 2A), accompanied by a notable decrease in the abundance of riboflavin on Day 14, while the intensity of riboflavin 5’-phosphate sodium, a bio-active form of riboflavin, exhibited an increase relative to baseline on Day 21 and Day 28(Fig. S2A). Riboflavin phosphate sodium salt form is an essential micronutrient, and plays an important role in the health of the skin, mucous membranes, and eyes. Retinol makes microorganisms more inclined to utilize or convert riboflavin into the active form, riboflavin 5’-phosphate sodium. Additionally, certain gene pathways demonstrated decreased expression levels, including biofilm formation of Vibrio cholerae and flagellar assembly were down-regulated on Day 21. And the bacterial secretion system and O-antigen nucleotide sugar biosynthesis, both were consistently down-regulated during the last three time points(Fig. 2A). The down-regulation implies that retinol may possess antimicrobial and anti-inflammatory capabilities by inhibiting bacterial metabolic activity, secretion of bacterial products, and influencing the integrity of bacterial structure. 3. Retinol stimulates skin microbiota’s secretion of diverse beneficial metabolites for synergistic anti-aging effects. We further utilized MetOrigin to perform tracing analysis of metabolites (based on the databases of MetOrigin tracking to determine whether they originated from the host, microorganisms, or co-metabolism) and metabolic pathway enrichment analysis. Of note, nicotinate and nicotinamide metabolism pathway (hsa00760) enriched in the host on Day 21 , and supported by increment of N1-methyl-4-pyridone-3-carboxamide which is associated with this pathway In the microbe, degradation of flavonoids (ko00946), phenylalanine, tyrosine and tryptophan biosynthesis (ko00400), and biosynthesis of various plant secondary metabolites (ko00999) were up-regulated (Fig. 3A). Based on the databases of MetOrigin, we found a variety of microbial-origin metabolites significantly related to the enrichment of the above pathways, some of which have been reported or experimentally verified to be beneficial to the skin. Maesopsin and apigenin were related to ko00946, where apigenin was known as an anti-tumor substance that is particularly helpful in preventing and reversing the formation of abnormal skin49–52. Quinic acid, 3-dehydroquinic acid and protocatechuic acid were related to ko00400, where quinic acid was reported to have an antiphotoaging effect by protecting human dermal fibroblasts53,54 and protocatechuic acid was demonstrated to have anti-oxidate and anti-aging effects by inducing dermal fibroblasts to synthesis type-1 collagen55–57. 
In ko00999, (+)-pinoresinol and secoisolariciresinol were significantly related; the former was reported to stimulate keratinocyte proliferation [58,59] and the latter was reported to suppress atopic dermatitis in the mouse when administered orally [60]. USER: Explain how retinol helps skin in a month. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
10
8
1,482
null
836
You can only respond to the prompt using the information in the context block and no other sources.
Based on this text, what are the primary differences between customer surveys and customer feedback collected from other sources?
The most direct method for measuring retail shoppability is to talk with customers. This can be as simple as asking shoppers what they like and dislike about the store, or a more structured questionnaire where people evaluate the quality of the shopping experience on several dimensions. Focus group studies can be a useful first step to identify problems with the shopping environment, but surveys are a better tool for the ongoing measurement and evaluation of store performance. A survey might ask shoppers to rate the store on specific features, including the breadth and depth of product assortments, the perception of product quality and value, the attractiveness of displays and merchandising, the ease of navigation, the level of shopping convenience, the availability of product information, the frequency of out-of-stocks, the quality of service, and the speed of checkout; as well as collecting more general reactions, such as overall enjoyment of the shopping experience, satisfaction with the products purchased, likelihood of recommending the store, and intention to return in the future. One common technique for collecting this information is the exit interview, where customers are asked to share their reactions after completing the shopping trip. The survey can be conducted immediately following the store visit — shoppers are intercepted and questioned as they leave the store — or at a later time using the customer’s phone number or e-mail address captured at checkout. Alternatively, the retailer can print the telephone number or web address of an automated survey on the sales receipt, along with an incentive to complete the interview. Another popular data-collection method is the critical incident technique. Shoppers are contacted at home and asked to remember the last time they went shopping for a particular product. Consumers recall the positive and negative aspects of the shopping experience and offer suggestions for improvement. Customer feedback can be collected on an ongoing basis from several other sources. Customer comments, complaints and suggestions at an in-store service desk or remote call center may suggest difficulties with service quality and other aspects of the shopping experience. Superquinn actively solicits such feedback from customers by rewarding each shopper with 100 SuperClub loyalty program points each time they report a company goof. Product returns and exchanges highlight potential problems in product quality and customer education. Customer inquiries, both in the store and through a toll-free number, can signal demand for new products. Employees are also a valuable source of information on what customers want, how they shop, and the obstacles they encounter. Survey research offers several benefits. It provides quick and inexpensive consumer feedback. It generates diagnostic information that can help guide improvements in the shopping experience. It allows the researcher to evaluate the importance of shopping factors for specific consumer segments and product categories. Store ratings can be benchmarked against competitors and tracked over time to evaluate performance. Surveys also have limitations that should be kept in mind. Consumers may not notice or report poor performance because it is what they have come to expect. Consumer memory is limited, so shoppers may not recall shelf arrangements, merchandising, and promotions, even though these variables affect their behavior. 
Consumers are only knowledgeable about the specific stores and categories they shop, so some ratings may not be reliable. Finally, it can be difficult to relate survey results to more objective measures of store performance.
You can only respond to the prompt using the information in the context block and no other sources. Based on this text, what are the primary differences between customer surveys and customer feedback collected from other sources? The most direct method for measuring retail shoppability is to talk with customers. This can be as simple as asking shoppers what they like and dislike about the store, or a more structured questionnaire where people evaluate the quality of the shopping experience on several dimensions. Focus group studies can be a useful first step to identify problems with the shopping environment, but surveys are a better tool for the ongoing measurement and evaluation of store performance. A survey might ask shoppers to rate the store on specific features, including the breadth and depth of product assortments, the perception of product quality and value, the attractiveness of displays and merchandising, the ease of navigation, the level of shopping convenience, the availability of product information, the frequency of out-of-stocks, the quality of service, and the speed of checkout; as well as collecting more general reactions, such as overall enjoyment of the shopping experience, satisfaction with the products purchased, likelihood of recommending the store, and intention to return in the future. One common technique for collecting this information is the exit interview, where customers are asked to share their reactions after completing the shopping trip. The survey can be conducted immediately following the store visit — shoppers are intercepted and questioned as they leave the store — or at a later time using the customer’s phone number or e-mail address captured at checkout. Alternatively, the retailer can print the telephone number or web address of an automated survey on the sales receipt, along with an incentive to complete the interview. Another popular data-collection method is the critical incident technique. Shoppers are contacted at home and asked to remember the last time they went shopping for a particular product. Consumers recall the positive and negative aspects of the shopping experience and offer suggestions for improvement. Customer feedback can be collected on an ongoing basis from several other sources. Customer comments, complaints and suggestions at an in-store service desk or remote call center may suggest difficulties with service quality and other aspects of the shopping experience. Superquinn actively solicits such feedback from customers by rewarding each shopper with 100 SuperClub loyalty program points each time they report a company goof. Product returns and exchanges highlight potential problems in product quality and customer education. Customer inquiries, both in the store and through a toll-free number, can signal demand for new products. Employees are also a valuable source of information on what customers want, how they shop, and the obstacles they encounter. Survey research offers several benefits. It provides quick and inexpensive consumer feedback. It generates diagnostic information that can help guide improvements in the shopping experience. It allows the researcher to evaluate the importance of shopping factors for specific consumer segments and product categories. Store ratings can be benchmarked against competitors and tracked over time to evaluate performance. Surveys also have limitations that should be kept in mind. Consumers may not notice or report poor performance because it is what they have come to expect. 
Consumer memory is limited, so shoppers may not recall shelf arrangements, merchandising, and promotions, even though these variables affect their behavior. Consumers are only knowledgeable about the specific stores and categories they shop, so some ratings may not be reliable. Finally, it can be difficult to relate survey results to more objective measures of store performance.
You can only respond to the prompt using the information in the context block and no other sources. EVIDENCE: The most direct method for measuring retail shoppability is to talk with customers. This can be as simple as asking shoppers what they like and dislike about the store, or a more structured questionnaire where people evaluate the quality of the shopping experience on several dimensions. Focus group studies can be a useful first step to identify problems with the shopping environment, but surveys are a better tool for the ongoing measurement and evaluation of store performance. A survey might ask shoppers to rate the store on specific features, including the breadth and depth of product assortments, the perception of product quality and value, the attractiveness of displays and merchandising, the ease of navigation, the level of shopping convenience, the availability of product information, the frequency of out-of-stocks, the quality of service, and the speed of checkout; as well as collecting more general reactions, such as overall enjoyment of the shopping experience, satisfaction with the products purchased, likelihood of recommending the store, and intention to return in the future. One common technique for collecting this information is the exit interview, where customers are asked to share their reactions after completing the shopping trip. The survey can be conducted immediately following the store visit — shoppers are intercepted and questioned as they leave the store — or at a later time using the customer’s phone number or e-mail address captured at checkout. Alternatively, the retailer can print the telephone number or web address of an automated survey on the sales receipt, along with an incentive to complete the interview. Another popular data-collection method is the critical incident technique. Shoppers are contacted at home and asked to remember the last time they went shopping for a particular product. Consumers recall the positive and negative aspects of the shopping experience and offer suggestions for improvement. Customer feedback can be collected on an ongoing basis from several other sources. Customer comments, complaints and suggestions at an in-store service desk or remote call center may suggest difficulties with service quality and other aspects of the shopping experience. Superquinn actively solicits such feedback from customers by rewarding each shopper with 100 SuperClub loyalty program points each time they report a company goof. Product returns and exchanges highlight potential problems in product quality and customer education. Customer inquiries, both in the store and through a toll-free number, can signal demand for new products. Employees are also a valuable source of information on what customers want, how they shop, and the obstacles they encounter. Survey research offers several benefits. It provides quick and inexpensive consumer feedback. It generates diagnostic information that can help guide improvements in the shopping experience. It allows the researcher to evaluate the importance of shopping factors for specific consumer segments and product categories. Store ratings can be benchmarked against competitors and tracked over time to evaluate performance. Surveys also have limitations that should be kept in mind. Consumers may not notice or report poor performance because it is what they have come to expect. 
Consumer memory is limited, so shoppers may not recall shelf arrangements, merchandising, and promotions, even though these variables affect their behavior. Consumers are only knowledgeable about the specific stores and categories they shop, so some ratings may not be reliable. Finally, it can be difficult to relate survey results to more objective measures of store performance. USER: Based on this text, what are the primary differences between customer surveys and customer feedback collected from other sources? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
18
19
556
null
379
Respond using only the information contained in the text. The response must be no more than 250 words.
According to the document, what are some limitations of big data sets when conducting research?
Collectively, this research suggests that big data offers both new potential discriminatory harms and new potential solutions to discriminatory harms. To maximize the benefits and limit the harms, companies should consider the questions raised by research in this area. These questions include the following: 1. How representative is your data set? Workshop participants and researchers note that the data sets, on which all big data analysis relies, may be missing information about certain populations, e.g., individuals who are more careful about revealing information about themselves, who are less involved in the formal economy, who have unequal access or less fluency in technology resulting in a digital divide148 or data desert,149 or whose behaviors are simply not observed because they are believed to be less profitable constituencies.150 Recent examples demonstrate the impact of missing information about particular populations on data analytics. For example, Hurricane Sandy generated more than twenty million tweets between October 27 and November 1, 2012.151 If organizations were to use this data to determine where services should be deployed, the people who needed services the most may not have received them. The greatest number of tweets about Hurricane Sandy came from Manhattan, creating the illusion that Manhattan was the hub of the disaster. Very few messages originated from more severely affected locations, such as Breezy Point, Coney Island, and Rockaway—areas with lower levels of smartphone ownership and Twitter usage. As extended power blackouts drained batteries and limited cellular access, even fewer tweets came from the worst hit areas. As one researcher noted, “data are assumed to accurately reflect the social world, but there are significant gaps, with little or no signal coming from particular communities.”152 Organizations have developed ways to overcome this issue. For example, the city of Boston developed an application called Street Bump that utilizes smartphone features such as GPS feeds to collect and report to the city information about road conditions, including potholes. However, after the release of the application, the Street Bump team recognized that because lower income individuals may be less likely to carry smartphones, the data was likely not fully representative of all road conditions. If the city had continued relying on the biased data, it might have skewed road services to higher income neighborhoods. The team addressed this problem by issuing its application to city workers who service the whole city and supplementing the data with that from the public.153 This example demonstrates why it is important to consider the digital divide and other issues of underrepresentation and overrepresentation in data inputs before launching a product or service in order to avoid skewed and potentially unfair ramifications. 2. Does your data model account for biases? 
While large data sets can give insight into previously intractable challenges, hidden biases at both the collection and analytics stages of big data’s life cycle could lead to disparate impact.154 Researchers have noted that big data analytics “can reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society.”155 For example, if an employer uses big data analytics to synthesize information gathered on successful existing employees to define a “good employee candidate,” the employer could risk incorporating previous discrimination in employment decisions into new employment decisions.156 Even prior to the widespread use of big data, there is some evidence of the use of data leading to the reproduction of existing biases. For example, one researcher has noted that a hospital developed a computer model to help identify “good medical school applicants” based on performance levels of previous and existing students, but, in doing so, the model reproduced prejudices in prior admission decisions.157 Companies can also design big data algorithms that learn from human behavior; these algorithms may “learn” to generate biased results. For example, one academic found that Reuters and Google queries for names identified by researchers to be associated with African-Americans were more likely to return advertisements for arrest records than for names identified by researchers to be associated with white Americans.158 The academic concluded that determining why this discrimination was occurring was beyond the scope of her research, but reasoned that search engines’ algorithms may learn to prioritize arrest record ads for searches of names associated with African-Americans if people click on such ads more frequently than other ads.159 This could reinforce the display of such ads and perpetuate the cycle. Companies should therefore think carefully about how the data sets and the algorithms they use have been generated. Indeed, if they identify potential biases in the creation of these data sets or the algorithms, companies should develop strategies to overcome them. As noted above, Google changed its interview and hiring process to ask more behavioral questions and to focus less on academic grades after discovering that replicating its existing definitions of a “good employee” was resulting in a homogeneous tech workforce.160 More broadly, companies are starting to recognize that if their big data algorithms only consider applicants from “top tier” colleges to help them make hiring decisions, they may be incorporating previous biases in college admission decisions.161 As in the examples discussed above, companies should develop ways to use big data to expand the pool of qualified applicants they will consider.162 3. How accurate are your predictions based on big data? Some researchers have also found that big data analysis does not give sufficient attention to traditional applied statistics issues, thus leading to incorrect results and predictions.163 They note that while big data is very good at detecting correlations, it does not explain which correlations are meaningful.164 A prime example that demonstrates the limitations of big data analytics is Google Flu Trends, a machine-learning algorithm for predicting the number of flu cases based on Google search terms. 
To predict the spread of influenza across the United States, the Google team analyzed the top fifty million search terms for indications that the flu had broken out in particular locations. While, at first, the algorithms appeared to create accurate predictions of where the flu was more prevalent, it generated highly inaccurate estimates over time.165 This could be because the algorithm failed to take into account certain variables. For example, the algorithm may not have taken into account that people would be more likely to search for flu-related terms if the local news ran a story on a flu outbreak, even if the outbreak occurred halfway around the world. As one researcher has noted, Google Flu Trends demonstrates that a “theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down.”166 As another example, workshop participants discussed the fact that lenders can improve access to credit by using non-traditional indicators, e.g., rental or utility bill payment history.167 Consumers, however, have the right to withhold rent if their landlord does not provide heat or basic sanitation services. In these instances, simply compiling rental payment history would not necessarily demonstrate whether the person is a good credit risk.168 In some cases, these sources of inaccuracies are unlikely to have significant negative effects on consumers. For example, it may be that big data analytics shows that 30 percent of consumers who buy diapers will respond to an ad for baby formula. That response rate may be enough for a marketer to find it worthwhile to send buyers of diapers an advertisement for baby formula. The 70 percent of consumers who buy diapers but are not interested in formula can disregard the ad or discard it at little cost. Similarly, consumers who are interested in formula and who do not buy diapers are unlikely to be substantially harmed because they did not get the ad. On the other hand, if big data analytics are used as the basis for access to credit, housing, or other similar benefits, the potential effects on consumers from inaccuracies could be substantial.169 For example, suppose big data analytics predict that people who do not participate in social media are 30 percent more likely to be identity thieves, leading a fraud detection tool to flag such people as “risky.” Suppose further that a wireless company uses this tool and requires “risky” people to submit additional documentation before they can obtain a cell phone contract. These people may not be able to obtain the contract if they do not have the required documentation.
Respond using only the information contained in the text. The response must be no more than 250 words. Collectively, this research suggests that big data offers both new potential discriminatory harms and new potential solutions to discriminatory harms. To maximize the benefits and limit the harms, companies should consider the questions raised by research in this area. These questions include the following: 1. How representative is your data set? Workshop participants and researchers note that the data sets, on which all big data analysis relies, may be missing information about certain populations, e.g., individuals who are more careful about revealing information about themselves, who are less involved in the formal economy, who have unequal access or less fluency in technology resulting in a digital divide148 or data desert,149 or whose behaviors are simply not observed because they are believed to be less profitable constituencies.150 Recent examples demonstrate the impact of missing information about particular populations on data analytics. For example, Hurricane Sandy generated more than twenty million tweets between October 27 and November 1, 2012.151 If organizations were to use this data to determine where services should be deployed, the people who needed services the most may not have received them. The greatest number of tweets about Hurricane Sandy came from Manhattan, creating the illusion that Manhattan was the hub of the disaster. Very few messages originated from more severely affected locations, such as Breezy Point, Coney Island, and Rockaway—areas with lower levels of smartphone ownership and Twitter usage. As extended power blackouts drained batteries and limited cellular access, even fewer tweets came from the worst hit areas. As one researcher noted, “data are assumed to accurately reflect the social world, but there are significant gaps, with little or no signal coming from particular communities.”152 Organizations have developed ways to overcome this issue. For example, the city of Boston developed an application called Street Bump that utilizes smartphone features such as GPS feeds to collect and report to the city information about road conditions, including potholes. However, after the release of the application, the Street Bump team recognized that because lower income individuals may be less likely to carry smartphones, the data was likely not fully representative of all road conditions. If the city had continued relying on the biased data, it might have skewed road services to higher income neighborhoods. The team addressed this problem by issuing its application to city workers who service the whole city and supplementing the data with that from the public.153 This example demonstrates why it is important to consider the digital divide and other issues of underrepresentation and overrepresentation in data inputs before launching a product or service in order to avoid skewed and potentially unfair ramifications. 2. Does your data model account for biases? 
According to the document, what are some limitations of big data sets when conducting research?
false
18
15
1,395
null
598
To answer the following question, use only information contained in the context block/prompt. Do not use any previous knowledge or outside sources.
Whether currently available or unavailable, what is an example of a smokeless cannabis delivery method that clinical trials hope to help develop?
Three focal concerns in evaluating the medical use of marijuana are: 1. Evaluation of the effects of isolated cannabinoids; 2. Evaluation of the risks associated with the medical use of marijuana; and 3. Evaluation of the use of smoked marijuana. EFFECTS OF ISOLATED CANNABINOIDS Cannabinoid Biology Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids. Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions: o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory. o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear. o The brain develops tolerance to cannabinoids. o Animal research demonstrates the potential for dependence, but this potential is observed under a narrower range of conditions than with benzodiazepines, opiates, cocaine, or nicotine. o Withdrawal symptoms can be observed in animals but appear to be mild compared to opiates or benzodiazepines, such as diazepam (Valium). Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems. Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone. Efficacy of Cannabinoid Drugs The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.) The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting. 
Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified. Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs. Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances. Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems. Influence of Psychological Effects on Therapeutic Effects The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite. Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect. Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials. RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA Physiological Risks Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. 
When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants. For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use. The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies. Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease. Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent. Marijuana Dependence and Withdrawal A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse. Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping. Marijuana as a "Gateway" Drug Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age. In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. 
It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use. Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential. Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids. USE OF SMOKED MARIJUANA Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups. Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy. The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use. Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions: o failure of all approved medications to provide relief has been documented, o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs, o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use. Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. 
One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions. It is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments. HOW THIS STUDY WAS CONDUCTED Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions. Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluation of the methods used in various studies and the validity of the authors' conclusions.
Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves. The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers). Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from. The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS patients. Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting. MARIJUANA TODAY The Changing Legal Landscape In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug.
Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior. In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Legislation (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients. Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8 Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60-70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions. Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis.
The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate. Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D). Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.1,10,11 Marijuana's use as an herbal remedy before the 20th century is well documented. However, modern medicine adheres to different standards from those used in the past. The question is not whether marijuana can be used as an herbal remedy but rather how well this remedy meets today's standards of efficacy and safety. We understand much more than previous generations about medical risks. Our society generally expects its licensed medications to be safe, reliable, and of proven efficacy; contaminants and inconsistent ingredients in our health treatments are not tolerated. That refers not only to prescription and over-the-counter drugs but also to vitamin supplements and herbal remedies purchased at the grocery store. For example, the essential amino acid l-tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990 when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer. Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid. Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development.
Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds. Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2 WHO USES MEDICAL MARIJUANA? There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed. John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1). The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36-45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile.
For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old. Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain. Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting. Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it. Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients). Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission. The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. 
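The "fraction with an unknown denominator" point lends itself to a small worked example. The numbers below are hypothetical (the 42 is roughly the number of people who spoke or wrote to the study team, and the candidate denominators are invented); the point is that the same count of positive anecdotes is consistent with very different response rates, so anecdotes alone cannot establish clinical value.

```python
# Hypothetical illustration of the "unknown denominator" problem: the same number
# of positive anecdotes is compatible with very different response rates.
positive_reports = 42  # assumed count of people reporting benefit (the numerator)

for total_tried in (60, 500, 10_000, 250_000):  # possible, unknown denominators
    rate = positive_reports / total_tried
    print(f"If {total_tried:>7,} people actually tried it, "
          f"42 positive reports correspond to a rate of about {rate:.2%}")
# Without knowing how many people tried marijuana for medical purposes, or how many
# tried it without benefit and never reported, no response rate can be estimated.
```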
Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions. CANNABIS AND THE CANNABINOIDS Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC; in the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy." Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated. Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994).14 The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition. ORGANIZATION OF THE REPORT Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology. Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana. Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.
Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone. Efficacy of Cannabinoid Drugs The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.) The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting. Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified. Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs. Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances. Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems. Influence of Psychological Effects on Therapeutic Effects The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite. 
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect. Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials. RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA Physiological Risks Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants. For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use. The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies. Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease. Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent. Marijuana Dependence and Withdrawal A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse. Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping. 
Marijuana as a "Gateway" Drug Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age. In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use. Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential. Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids. USE OF SMOKED MARIJUANA Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups. Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy. The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use. 
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions: o failure of all approved medications to provide relief has been documented, o the symptoms can reasonably be expected to be relieved by rapid-onset cannabinoid drugs, o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use. Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.

Although this report focuses on scientific data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.

HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. 
Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions. Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves. The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers). Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from. The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS 9 patients. 
Marinol is the brand name for dronabinol, which is delta-9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.

MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.

In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Laws (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.

Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). 
Although most courts rejected these claims, some accepted them.8

Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60—70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions. Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.

Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).

Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use. Marijuana's use as an herbal remedy before the 20th century is well documented.1,10,11 However, modern medicine adheres to different standards from those used in the past. The question is not whether marijuana can be used as an herbal remedy but rather how well this remedy meets today's standards of efficacy and safety. We understand much more than previous generations about medical risks. Our society generally expects its licensed medications to be safe, reliable, and of proven efficacy; contaminants and inconsistent ingredients in our health treatments are not tolerated. That refers not only to prescription and over-the-counter drugs but also to vitamin supplements and herbal remedies purchased at the grocery store. For example, the essential amino acid l-tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990, when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer. Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their roots either directly or indirectly in plant remedies.7 
At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid. Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.

Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2

WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed. John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. 
About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1). The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old. Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second-largest group is patients with chronic pain.

Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting. Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.

Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients). Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. 
Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission. The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.

CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Delta-9-tetrahydrocannabinol (delta-9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate delta-9-THC. In the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."

Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.

Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 199414). The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition. 
ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology. Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use. Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana. Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.

Primum non nocere. This is the physician's first rule: whatever treatment a physician prescribes to a patient--first, that treatment must not harm the patient. The most contentious aspect of the medical marijuana debate is not whether marijuana can alleviate particular symptoms but rather the degree of harm associated with its use. This chapter explores the negative health consequences of marijuana use, first with respect to drug abuse, then from a psychological perspective, and finally from a physiological perspective.

THE MARIJUANA "HIGH"
The most commonly reported effects of smoked marijuana are a sense of well-being or euphoria and increased talkativeness and laughter alternating with periods of introspective dreaminess followed by lethargy and sleepiness (see reviews by Adams and Martin, 1996,1 Hall and Solowij,59 and Hall et al.60). A characteristic feature of a marijuana "high" is a distortion in the sense of time associated with deficits in short-term memory and learning. A marijuana smoker typically has a sense of enhanced physical and emotional sensitivity, including a feeling of greater interpersonal closeness. The most obvious behavioral abnormality displayed by someone under the influence of marijuana is difficulty in carrying on an intelligible conversation, perhaps because of an inability to remember what was just said even a few words earlier. The high associated with marijuana is not generally claimed to be integral to its therapeutic value. 
But mood enhancement, anxiety reduction, and mild sedation can be desirable qualities in medications--particularly for patients suffering pain and anxiety. Thus, although the psychological effects of marijuana are merely side effects in the treatment of some symptoms, they might contribute directly to relief of other symptoms. They also must be monitored in controlled clinical trials to discern which effect of cannabinoids is beneficial. These possibilities are discussed later under the discussions of specific symptoms in chapter 4. The effects of various doses and routes of delivery of THC are shown in Table 3.1.

Adverse Mood Reactions
Although euphoria is the more common reaction to smoking marijuana, adverse mood reactions can occur. Such reactions occur most frequently in inexperienced users after large doses of smoked or oral marijuana. They usually disappear within hours and respond well to reassurance and a supportive environment. Anxiety and paranoia are the most common acute adverse reactions;59 others include panic, depression, dysphoria, depersonalization, delusions, illusions, and hallucinations.1,40,66,69 Of regular marijuana smokers, 17% report that they have experienced at least one of those symptoms, usually early in their use of marijuana.145 Those observations are particularly relevant for the use of medical marijuana in people who have not previously used marijuana.

DRUG DYNAMICS
There are many misunderstandings about drug abuse and dependence (see reviews by O'Brien114 and Goldstein54). The terms and concepts used in this report are as defined in the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),3 the most influential system in the United States for diagnoses of mental disorders, including substance abuse (see Box 3.1). Tolerance, dependence, and withdrawal are often presumed to imply abuse or addiction, but this is not the case. Tolerance and dependence are normal physiological adaptations to repeated use of any drug. The correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and some measure of physiological dependence. Even a patient who takes a medicine for appropriate medical indications and at the correct dosage can develop tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a beta-adrenergic receptor blocker, such as propranolol, might have a good therapeutic response; but if the drug is stopped abruptly, there can be a withdrawal syndrome that consists of tachycardia and a rebound increase in blood pressure to a point that is temporarily higher than before administration of the medication began.

Because it is an illegal substance, some people consider any use of marijuana as substance abuse. However, this report uses the medical definition; that is, substance abuse is a maladaptive pattern of repeated substance use manifested by recurrent and significant adverse consequences.3 Substance abuse and dependence are both diagnoses of pathological substance use. Dependence is the more serious diagnosis and implies compulsive drug use that is difficult to stop despite significant substance-related problems (see Box 3.2).

Reinforcement
Drugs vary in their ability to produce good feelings in users, and the more strongly reinforcing a drug is, the more likely it will be abused (G. Koob, Institute of Medicine (IOM) workshop). Marijuana is indisputably reinforcing for many people. 
The reinforcing properties of even so mild a stimulant as caffeine are typical of reinforcement by addicting drugs (reviewed by Goldstein54 in 1994). Caffeine is reinforcing for many people at low doses (100—200 mg, the average amount of caffeine in one to two cups of coffee) and is aversive at high doses (600 mg, the average amount of caffeine in six cups of coffee). The reinforcing effects of many drugs are different for different people. For example, caffeine was most reinforcing for test subjects who scored lowest on tests of anxiety but tended not to be reinforcing for the most anxious subjects.

As an argument to dispute the abuse potential of marijuana, some have cited the observation that animals do not willingly self-administer THC, as they will cocaine. Even if that were true, it would not be relevant to human use of marijuana. The value in animal models of drug self-administration is not that they are necessary to show that a drug is reinforcing but rather that they provide a model in which the effects of a drug can be studied. Furthermore, THC is indeed rewarding to animals at some doses but, like many reinforcing drugs, is aversive at high doses (4.0 mg/kg).93 Similar effects have been found in experiments conducted in animals outfitted with intravenous catheters that allow them to self-administer WIN 55,212, a drug that mimics the effects of THC.100

A specific set of neural pathways has been proposed to be a "reward system" that underlies the reinforcement of drugs of abuse and other pleasurable stimuli.51 Reinforcing properties of drugs are associated with their ability to increase concentrations of particular neurotransmitters in areas that are part of the proposed brain reward system. The median forebrain bundle and the nucleus accumbens are associated with brain reward pathways.88,144 Cocaine, amphetamine, alcohol, opioids, nicotine, and THC all increase extracellular fluid dopamine in the nucleus accumbens region (reviewed by Koob and Le Moal88 and Nestler and Aghajanian110 in 1997). However, it is important to note that brain reward systems are not strictly "drug reinforcement centers." Rather, their biological role is to respond to a range of positive stimuli, including sweet foods and sexual attraction.

Tolerance
The rate at which tolerance to the various effects of any drug develops is an important consideration for its safety and efficacy. For medical use, tolerance to some effects of cannabinoids might be desirable. Differences in the rates at which tolerance to the multiple effects of a drug develops can be dangerous. For example, tolerance to the euphoric effects of heroin develops faster than tolerance to its respiratory depressant effects, so heroin users tend to increase their daily doses to reach their desired level of euphoria, thereby putting themselves at risk for respiratory arrest. Because tolerance to the various effects of cannabinoids might develop at different rates, it is important to evaluate independently their effects on mood, motor performance, memory, and attention, as well as any therapeutic use under investigation. Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, and it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Performance impairment is less among people who use marijuana heavily than it is among those who use marijuana only occasionally,29,104,124 possibly because of tolerance. 
Heavy users tend to reach higher plasma concentrations of THC than light users after similar doses of THC, arguing against the possibility that heavy users show less performance impairment because they somehow absorb less THC (perhaps due to differences in smoking behavior).95

There appear to be variations in the development of tolerance to the different effects of marijuana and oral THC. For example, daily marijuana smokers participated in a residential laboratory study to compare the development of tolerance to THC pills and to smoked marijuana.61,62 One group was given marijuana cigarettes to smoke four times per day for four consecutive days; another group was given THC pills on the same schedule. During the four-day period, both groups became tolerant to feeling "high" and what they reported as a "good drug effect." In contrast, neither group became tolerant to the stimulatory effects of marijuana or THC on appetite. "Tolerance" does not mean that the drug no longer produced the effects but simply that the effects were less at the end than at the beginning of the four-day period. The marijuana smoking group reported feeling "mellow" after smoking and did not show tolerance to this effect; the group that took THC pills did not report feeling "mellow." The difference was also reported by many people who described their experiences to the IOM study team. The oral and smoked doses were designed to deliver roughly equivalent amounts of THC to a subject. Each smoked marijuana dose consisted of five 10-second puffs of a marijuana cigarette containing 3.1% THC; the pills contained 30 mg of THC. Both groups also received placebo drugs during other four-day periods. Although the dosing of the two groups was comparable, different routes of administration resulted in different patterns of drug effect. The peak effect of smoked marijuana is usually felt within minutes and declines sharply after 30 minutes;68,95 the peak effect of oral THC is usually not felt until about an hour and lasts for several hours.118
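The claim that the oral and smoked doses in this study were "roughly equivalent" can be illustrated with simple arithmetic. The sketch below is only a back-of-the-envelope check: the 3.1% THC concentration and the 30 mg pill dose come from the study described above, while the cigarette mass, the smoke delivery fraction, and the oral bioavailability are hypothetical values chosen for illustration, not figures reported by the IOM.

```python
# Back-of-the-envelope comparison of the smoked and oral THC doses described
# above. Only the 3.1% THC concentration and the 30 mg pill dose come from the
# study; every other number is a hypothetical assumption used for illustration.

THC_FRACTION = 0.031         # 3.1% THC cigarette (from the study)
ORAL_DOSE_MG = 30.0          # THC per pill (from the study)

CIGARETTE_MASS_MG = 800.0    # assumed mass of plant material in one cigarette
SMOKE_DELIVERY = 0.25        # assumed fraction of THC actually delivered in smoke
ORAL_BIOAVAILABILITY = 0.15  # assumed fraction of an oral dose reaching circulation

nominal_cigarette_thc = CIGARETTE_MASS_MG * THC_FRACTION       # ~24.8 mg in the cigarette
delivered_smoked_thc = nominal_cigarette_thc * SMOKE_DELIVERY  # ~6.2 mg delivered in smoke
absorbed_oral_thc = ORAL_DOSE_MG * ORAL_BIOAVAILABILITY        # ~4.5 mg absorbed orally

print(f"nominal THC in one 3.1% cigarette:      {nominal_cigarette_thc:.1f} mg")
print(f"assumed THC delivered by smoking:       {delivered_smoked_thc:.1f} mg")
print(f"assumed THC absorbed from a 30 mg pill: {absorbed_oral_thc:.1f} mg")
```

Under these illustrative assumptions the nominal content of one cigarette (about 25 mg) is close to the 30 mg pill, while the amounts actually reaching the smoker or the bloodstream are smaller and depend strongly on route, consistent with the report's observation that different routes of administration produce different patterns of drug effect.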
Withdrawal
A distinctive marijuana and THC withdrawal syndrome has been identified, but it is mild and subtle compared with the profound physical syndrome of alcohol or heroin withdrawal. The symptoms of marijuana withdrawal include restlessness, irritability, mild agitation, insomnia, sleep EEG disturbance, nausea, and cramping (Table 3.2).31,74 In addition to those symptoms, two recent studies noted several more. A group of adolescents under treatment for conduct disorders also reported fatigue and illusions or hallucinations after marijuana abstinence (this study is discussed further in the section on "Prevalence and Predictors of Dependence on Marijuana and Other Drugs").31 In a residential study of daily marijuana users, withdrawal symptoms included sweating and runny nose, in addition to those listed above.62 A marijuana withdrawal syndrome, however, has been reported only in a group of adolescents in treatment for substance abuse problems31 and in a research setting where subjects were given marijuana or THC daily.62,74

Withdrawal symptoms have been observed in carefully controlled laboratory studies of people after use of both oral THC and smoked marijuana.61,62 In one study, subjects were given very high doses of oral THC: 180—210 mg per day for 10—20 days, roughly equivalent to smoking 9—10 2% THC cigarettes per day. During the abstinence period at the end of the study, the study subjects were irritable and showed insomnia, runny nose, sweating, and decreased appetite. The withdrawal symptoms, however, were short lived. In four days they had abated. The time course contrasts with that in another study in which lower doses of oral THC were used (80—120 mg/day for four days) and withdrawal symptoms were still near maximal after four days.61,62

In animals, simply discontinuing chronic heavy dosing of THC does not reveal withdrawal symptoms, but the "removal" of THC from the brain can be made abrupt by another drug that blocks THC at its receptor if administered when the chronic THC is withdrawn. The withdrawal syndrome is pronounced, and the behavior of the animals becomes hyperactive and disorganized.153 The half-life of THC in brain is about an hour.16,24 Although traces of THC can remain in the brain for much longer periods, the amounts are not physiologically significant. Thus, the lack of a withdrawal syndrome when THC is abruptly withdrawn without administration of a receptor-blocking drug is probably not due to a prolonged decline in brain concentrations.
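A minimal first-order decay calculation, using the roughly one-hour brain half-life cited above, illustrates why lingering THC is unlikely to explain the absence of an abrupt withdrawal syndrome. Only the one-hour half-life comes from the text; real elimination is multiphasic, so treating it as a single exponential, and the particular time points shown, are simplifying assumptions for illustration.

```python
# Minimal sketch of first-order elimination using the approximately one-hour
# brain half-life of THC cited in the text. The single-exponential model and
# the chosen time points are simplifying assumptions for illustration only.

HALF_LIFE_H = 1.0  # approximate brain half-life of THC, in hours (from the text)

def remaining_fraction(hours: float, half_life_h: float = HALF_LIFE_H) -> float:
    """Fraction of the peak brain concentration remaining after `hours`."""
    return 0.5 ** (hours / half_life_h)

for t in (1, 4, 12, 24):
    print(f"after {t:>2} h: {remaining_fraction(t):.2e} of peak remains")
# After 12 hours roughly 0.02% of the peak remains; after 24 hours the fraction
# is on the order of one in ten million, far too small to be physiologically
# meaningful, which is the point made in the paragraph above.
```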
Craving
Craving, the intense desire for a drug, is the most difficult aspect of addiction to overcome. Research on craving has focused on nicotine, alcohol, cocaine, and opiates but has not specifically addressed marijuana.115 Thus, while this section briefly reviews what is known about drug craving, its relevance to marijuana use has not been established. Most people who suffer from addiction relapse within a year of abstinence, and they often attribute their relapse to craving.58 As addiction develops, craving increases even as maladaptive consequences accumulate. Animal studies indicate that the tendency to relapse is based on changes in brain function that continue for months or years after the last use of the drug.115 Whether neurobiological conditions change during the manifestation of an abstinence syndrome remains an unanswered question in drug abuse research. The "liking" of sweet foods, for example, is mediated by opioid forebrain systems and by brain stem systems,88 whereas "wanting" seems to be mediated by ascending dopamine neurons that project to the nucleus accumbens.109 Anticraving medications have been developed for nicotine and alcohol. The antidepressant, bupropion, blocks nicotine craving, while naltrexone blocks alcohol craving.74,115 Another category of addiction medication includes drugs that block other drugs' effects. Some of those drugs also block craving. For example, methadone blocks the euphoric effects of heroin and also reduces craving.

MARIJUANA USE AND DEPENDENCE
Prevalence of Use
Millions of Americans have tried marijuana, but most are not regular users. In 1996, 68.6 million people--32% of the U.S. population over 12 years old--had tried marijuana or hashish at least once in their lifetime, but only 5% were current users.132 Marijuana use is most prevalent among 18- to 25-year-olds and declines sharply after the age of 34 (Figure 3.1).77,132 Whites are more likely than blacks to use marijuana in adolescence, although the difference decreases by adulthood.132 Most people who have used marijuana did so first during adolescence. Social influences, such as peer pressure and prevalence of use by peers, are highly predictive of initiation into marijuana use.9 Initiation is not, of course, synonymous with continued or regular use. A cohort of 456 students who experimented with marijuana during their high school years were surveyed about their reasons for initiating, continuing, and stopping their marijuana use.9 Students who began as heavy users were excluded from the analysis. Those who did not become regular marijuana users cited two types of reasons for discontinuing. The first was related to health and well-being; that is, they felt that marijuana was bad for their health or for their family and work relationships. The second type was based on age-related changes in circumstances, including increased responsibility and decreased regular contact with other marijuana users. Among high school students who quit, parental disapproval was a stronger influence than peer disapproval in discontinuing marijuana use. In the initiation of marijuana use, the reverse was true. The reasons cited by those who continued to use marijuana were to "get in a better mood or feel better." Social factors were not a significant predictor of continued use. Data on young adults show similar trends. Those who use drugs in response to social influences are more likely to stop using them than those who also use them for psychological reasons.80

The age distribution of marijuana users among the general population contrasts with that of medical marijuana users. Marijuana use generally declines sharply after the age of 34 years, whereas medical marijuana users tend to be over 35. That raises the question of what, if any, relationship exists between abuse and medical use of marijuana; however, no studies reported in the scientific literature have addressed this question.

Prevalence and Predictors of Dependence on Marijuana and Other Drugs
Many factors influence the likelihood that a particular person will become a drug abuser or an addict; the user, the environment, and the drug are all important factors (Table 3.3).114 The first two categories apply to potential abuse of any substance; that is, people who are vulnerable to drug abuse for individual reasons and who find themselves in an environment that encourages drug abuse are initially likely to abuse the most readily available drug--regardless of its unique set of effects on the brain. The third category includes drug-specific effects that influence the abuse liability of a particular drug. As discussed earlier in this chapter, the more strongly reinforcing a drug is, the more likely that it will be abused. The abuse liability of a drug is enhanced by how quickly its effects are felt, and this is determined by how the drug is delivered. In general, the effects of drugs that are inhaled or injected are felt within minutes, and the effects of drugs that are ingested take a half hour or more.

The proportion of people who become addicted varies among drugs. Table 3.4 shows estimates for the proportion of people among the general population who used or became dependent on different types of drugs. The proportion of users that ever became dependent includes anyone who was ever dependent--whether it was for a period of weeks or years--and thus includes more than those who are currently dependent. Compared to most other drugs listed in this table, dependence among marijuana users is relatively rare. This might be due to differences in specific drug effects, the availability of or penalties associated with the use of the different drugs, or some combination. Daily use of most illicit drugs is extremely rare in the general population. 
In 1989, daily use of marijuana among high school seniors was less than that of alcohol (2.9% and 4.2%, respectively).76 Drug dependence is more prevalent in some sectors of the population than in others. Age, gender, and race or ethnic group are all important.8 Excluding tobacco and alcohol, the following trends of drug dependence are statistically significant:8 men are 1.6 times as likely as women to become drug dependent, non-Hispanic whites are about twice as likely as blacks to become drug dependent (the difference between non-Hispanic and Hispanic whites was not significant), and people 25—44 years old are more than three times as likely as those over 45 years old to become drug dependent.

More often than not, drug dependence co-occurs with other psychiatric disorders. Most people with a diagnosis of drug dependence disorder also have a diagnosis of another psychiatric disorder (76% of men and 65% of women).76 The most frequent co-occurring disorder is alcohol abuse; 60% of men and 30% of women with a diagnosis of drug dependence also abuse alcohol. In women who are drug dependent, phobic disorders and major depression are almost equally common (29% and 28%, respectively). Note that this study distinguished only between alcohol, nicotine, and "other drugs"; marijuana was grouped among "other drugs." The frequency with which drug dependence and other psychiatric disorders co-occur might not be the same for marijuana and other drugs that were included in that category.

A strong association between drug dependence and antisocial personality or its precursor, conduct disorder, is also widely reported in children and adults (reviewed in 1998 by Robins126). Although the causes of the association are uncertain, Robins recently concluded that it is more likely that conduct disorders generally lead to substance abuse than the reverse.126 Such a trend might, however, depend on the age at which the conduct disorder is manifested. A longitudinal study by Brooks and co-workers noted a significant relationship between adolescent drug use and disruptive disorders in young adulthood; except for earlier psychopathology, such as childhood conduct disorder, the drug use preceded the psychiatric disorders.18 In contrast with use of other illicit drugs and tobacco, moderate (less than once a week and more than once a month) to heavy marijuana use did not predict anxiety or depressive disorders; but it was similar to those other drugs in predicting antisocial personality disorder. The rates of disruptive disorders increased with increased drug use. Thus, heavy drug use among adolescents can be a warning sign for later psychiatric disorders; whether it is an early manifestation of or a cause of those disorders remains to be determined.

Psychiatric disorders are more prevalent among adolescents who use drugs--including alcohol and nicotine--than among those who do not.79 Table 3.5 indicates that adolescent boys who smoke cigarettes daily are about 10 times as likely to have a psychiatric disorder diagnosis as those who do not smoke. However, the table does not compare intensity of use among the different drug classes. Thus, although daily cigarette smoking among adolescent boys is more strongly associated with psychiatric disorders than is any use of illicit substances, it does not follow that this comparison is true for every amount of cigarette smoking.79 Few marijuana users become dependent on marijuana (Table 3.4), but those who do encounter problems similar to those associated with dependence on other drugs.19,143 
Dependence appears to be less severe among people who use only marijuana than among those who abuse cocaine or those who abuse marijuana with other drugs (including alcohol).19,143

Data gathered in 1990—1992 from the National Comorbidity Study of over 8,000 persons 15—54 years old indicate that 4.2% of the general population were dependent on marijuana at some time.8 Similar results for the frequency of substance abuse among the general population were obtained from the Epidemiological Catchment Area Program, a survey of over 19,000 people. According to data collected in the early 1980s for that study, 4.4% of adults have, at one time, met the criteria for marijuana dependence. In comparison, 13.8% of adults met the criteria for alcohol dependence and 36.0% for tobacco dependence. After alcohol and nicotine, marijuana was the substance most frequently associated with a diagnosis of substance dependence.

In a 15-year study begun in 1979, 7.3% of 1,201 adolescents and young adults in suburban New Jersey at some time met the criteria for marijuana dependence; this indicates that the rate of marijuana dependence might be even higher in some groups of adolescents and young adults than in the general population.71 Adolescents meet the criteria for drug dependence at lower rates of marijuana use than do adults,25 and this suggests that they are more vulnerable to dependence than adults (see Box 3.2).

Youths who are already dependent on other substances are particularly vulnerable to marijuana dependence. For example, Crowley and co-workers31 interviewed a group of 229 adolescent patients in a residential treatment program for delinquent, substance-involved youth and found that those patients were dependent on an average of 3.2 substances. The adolescents had previously been diagnosed as dependent on at least one substance (including nicotine and alcohol) and had three or more conduct disorder symptoms during their life. About 83% of those who had used marijuana at least six times went on to develop marijuana dependence. About equal numbers of youths in the study had a diagnosis of marijuana dependence and a diagnosis of alcohol dependence; fewer were nicotine dependent. Comparisons of dependence potential between different drugs should be made cautiously. The probability that a particular drug will be abused is influenced by many factors, including the specific drug effects and availability of the drug.

Although parents often state that marijuana caused their children to be rebellious, the troubled adolescents in the study by Crowley and co-workers developed conduct disorders before marijuana abuse. That is consistent with reports that the more symptoms of conduct disorders children have,127 the younger they begin drug abuse, and that the earlier they begin drug use, the more likely it is to be followed by abuse or dependence.125

Genetic factors are known to play a role in the likelihood of abuse for drugs other than marijuana,7,129 and it is not unexpected that genetic factors play a role in the marijuana experience, including the likelihood of abuse. A study of over 8,000 male twins listed in the Vietnam Era Twin Registry indicated that genes have a statistically significant influence on whether a person finds the effects of marijuana pleasant.97 Not surprisingly, people who found marijuana to be pleasurable used it more often than those who found it unpleasant.
The study suggested that, although social influences play an important role in the initiation of use, individual differences--perhaps associated with the brain's reward system--influence whether a person will continue using marijuana. Similar results were found in a study of female twins.86,92 Family and social environment strongly influenced the likelihood of ever using marijuana but had little effect on the likelihood of heavy use or abuse. The latter were more influenced by genetic factors. Those results are consistent with the finding that the degree to which rats find THC rewarding is genetically based.

In summary, although few marijuana users develop dependence, some do. But they appear to be less likely to do so than users of other drugs (including alcohol and nicotine), and marijuana dependence appears to be less severe than dependence on other drugs. Drug dependence is more prevalent in some sectors of the population than others, but no group has been identified as particularly vulnerable to the drug-specific effects of marijuana. Adolescents, especially troubled ones, and people with psychiatric disorders (including substance abuse) appear to be more likely than the general population to become dependent on marijuana.

If marijuana or cannabinoid drugs were approved for therapeutic uses, it would be important to consider the possibility of dependence, particularly for patients at high risk for substance dependence. Some controlled substances that are approved medications produce dependence after long-term use; this, however, is a normal part of patient management and does not generally present undue risk to the patient.

Progression from Marijuana to Other Drugs

The fear that marijuana use might cause, as opposed to merely precede, the use of drugs that are more harmful is of great concern. To judge from comments submitted to the IOM study team, it appears to be of greater concern than the harms directly related to marijuana itself. The discussion that marijuana is a "gateway" drug implicitly recognizes that other illicit drugs might inflict greater damage to health or social relations than marijuana. Although the scientific literature generally discusses drug use progression between a variety of drug classes, including alcohol and tobacco, the public discussion has focused on marijuana as a "gateway" drug that leads to abuse of more harmful illicit drugs, such as cocaine and heroin.

There are strikingly regular patterns in the progression of drug use from adolescence to adulthood. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug that most people encounter. Not surprisingly, most users of other illicit drugs used marijuana first.81,82 In fact, most drug users do not begin their drug use with marijuana--they begin with alcohol and nicotine, usually when they are too young to do so legally.82,90

The gateway analogy evokes two ideas that are often confused. The first, more often referred to as the "stepping stone" hypothesis, is the idea that progression from marijuana to other drugs arises from pharmacological properties of marijuana itself.82 The second is that marijuana serves as a gateway to the world of illegal drugs in which youths have greater opportunity and are under greater social pressure to try other illegal drugs. The latter interpretation is most often used in the scientific literature, and it is supported, although not proven, by the available data.
The stepping stone hypothesis applies to marijuana only in the broadest sense. People who enjoy the effects of marijuana are, logically, more likely to be willing to try other mood-altering drugs than are people who are not willing to try marijuana or who dislike its effects. In other words, many of the factors associated with a willingness to use marijuana are, presumably, the same as those associated with a willingness to use other illicit drugs. Those factors include physiological reactions to the drug effect, which are consistent with the stepping stone hypothesis, but also psychosocial factors, which are independent of drug-specific effects. There is no evidence that marijuana serves as a stepping stone on the basis of its particular physiological effect. One might argue that marijuana is generally used before other illicit mood-altering drugs, in part, because its effects are milder; in that case, marijuana is a stepping stone only in the same sense as taking a small dose of a particular drug and then increasing that dose over time is a stepping stone to increased drug use.

Whereas the stepping stone hypothesis presumes a predominantly physiological component of drug progression, the gateway theory is a social theory. The latter does not suggest that the pharmacological qualities of marijuana make it a risk factor for progression to other drug use. Instead, the legal status of marijuana makes it a gateway drug.82

Psychiatric disorders are associated with substance dependence and are probably risk factors for progression in drug use. For example, the troubled adolescents studied by Crowley and co-workers31 were dependent on an average of 3.2 substances, and this suggests that their conduct disorders were associated with increased risk of progressing from one drug to another. Abuse of a single substance is probably also a risk factor for later multiple drug use. For example, in a longitudinal study that examined drug use and dependence, about 26% of problem drinkers reported that they first used marijuana after the onset of alcohol-related problems (R. Pandina, IOM workshop). The study also found that 11% of marijuana users developed chronic marijuana problems; most also had alcohol problems.

Intensity of drug use is an important risk factor in progression. Daily marijuana users are more likely than their peers to be extensive users of other substances (for review, see Kandel and Davies78). Of 34- to 35-year-old men who had used marijuana 10—99 times by the age 24—25, 75% never used any other illicit drug; 53% of those who had used it more than 100 times did progress to using other illicit drugs 10 or more times. Comparable proportions for women are 64% and 50%.78 The factors that best predict use of illicit drugs other than marijuana are probably the following: age of first alcohol or nicotine use, heavy marijuana use, and psychiatric disorders. However, progression to illicit drug use is not synonymous with heavy or persistent drug use. Indeed, although the age of onset of use of licit drugs (alcohol and nicotine) predicts later illicit drug use, it does not appear to predict persistent or heavy use of illicit drugs.90

Data on the gateway phenomenon are often overinterpreted. For example, one study reports that "marijuana's role as a gateway drug appears to have increased."55 It was a retrospective study based on interviews of drug abusers who reported smoking crack or injecting heroin daily.
The data from the study provide no indication of what proportion of marijuana users become serious drug abusers; rather, they indicate that serious drug abusers usually use marijuana before they smoke crack or inject heroin. Only a small percentage of the adult population uses crack or heroin daily; during the five-year period from 1993 to 1997, an average of three people per 1,000 used crack and about two per 1,000 used heroin in the preceding month.132

Many of the data on which the gateway theory is based do not measure dependence; instead, they measure use--even once-only use. Thus, they show only that marijuana users are more likely to use other illicit drugs (even if only once) than are people who never use marijuana, not that they become dependent or even frequent users. The authors of these studies are careful to point out that their data should not be used as evidence of an inexorable causal progression; rather they note that identifying stage-based user groups makes it possible to identify the specific risk factors that predict movement from one stage of drug use to the next--the real issue in the gateway discussion.25

In the sense that marijuana use typically precedes rather than follows initiation into the use of other illicit drugs, it is indeed a gateway drug. However, it does not appear to be a gateway drug to the extent that it is the cause or even that it is the most significant predictor of serious drug abuse; that is, care must be taken not to attribute cause to association. The most consistent predictors of serious drug use appear to be the intensity of marijuana use and co-occurring psychiatric disorders or a family history of psychopathology (including alcoholism).78,83

An important caution is that data on drug use progression pertain to nonmedical drug use. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would be the same. Kandel and co-workers also included nonmedical use of prescription psychoactive drugs in their study of drug use progression82 and examined whether there is a clear and consistent sequence of drug use involving the abuse of prescription psychoactive drugs. The current data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse among medical marijuana users. Whether the medical use of marijuana might encourage drug abuse among the general community--not among medical marijuana users themselves but among others simply because of the fact that marijuana would be used for medical purposes--is another question.

LINK BETWEEN MEDICAL USE AND DRUG ABUSE

Almost everyone who spoke or wrote to the IOM study team about the potential harms posed by the medical use of marijuana felt that it would send the wrong message to children and teenagers. They stated that information about the harms caused by marijuana is undermined by claims that marijuana might have medical value. Yet many of our powerful medicines are also dangerous medicines. These two facets of medicine--effectiveness and risk--are inextricably linked. The question here is not whether marijuana can be both harmful and helpful but whether the perception of its benefits will increase its abuse. For now any answer to the question remains conjecture. Because marijuana is not an approved medicine, there is little information about the consequences of its medical use in modern society. Reasonable inferences might be drawn from some examples.
Opiates, such as morphine and codeine, are an example of a class of drugs that is both abused to great harm and used to great medical benefit, and it would be useful to examine the relationship between their medical use and their abuse. In a "natural experiment" during 1973—1978 some states decriminalized marijuana, and others did not. Finally, one can examine the short-term consequences of the publicity surrounding the 1996 medical marijuana campaign in California and ask whether it had any measurable impact on marijuana consumption among youth in California; the consequences of the "message" that marijuana might have medical use are examined below.

Medical Use and Abuse of Opiates

Two highly influential papers published in the 1920s and 1950s led to widespread concern among physicians and medical licensing boards that liberal use of opiates would result in many addicts (reviewed by Moulin and co-workers106 in 1996). Such fears have proven unfounded; it is now recognized that fear of producing addicts through medical treatment resulted in needless suffering among patients with pain as physicians needlessly limited appropriate doses of medications.27,44 In contrast with the use of alcohol, nicotine, and illicit drugs, there was not a comparable problem of addiction arising from misuse of drugs that have been prescribed for medical use; few people begin their drug abuse with prescribed medications.114 Opiates are carefully regulated in the medical setting, and diversion of medically prescribed opiates to the black market is not generally considered to be a major problem. No evidence suggests that the use of opiates or cocaine for medical purposes has increased the perception that their illicit use is safe or acceptable.

Clearly, there are risks that patients will abuse marijuana for its psychoactive effects and some likelihood of diversion of marijuana from legitimate medical channels into the illicit market. But those risks do not differentiate marijuana from many accepted medications that are abused by some patients or diverted from medical channels for nonmedical use. Medications with abuse potential are placed in Schedule II of the Controlled Substances Act, which brings them under stricter control, including quotas on the amount that can be legally manufactured (see chapter 5 for discussion of the Controlled Substances Act). That scheduling also signals to physicians that a drug has abuse potential and that they should monitor its use by patients who could be at risk for drug abuse.

Marijuana Decriminalization

Monitoring the Future, the annual survey of values and lifestyles of high school seniors, revealed that high school seniors in decriminalized states reported using no more marijuana than did their counterparts in states where marijuana was not decriminalized.72 Another study reported somewhat conflicting evidence indicating that decriminalization had increased marijuana use.105 That study used data from the Drug Abuse Warning Network (DAWN), which has collected data on drug-related emergency room (ER) cases since 1975. There was a greater increase from 1975 to 1978 in the proportion of ER patients who had used marijuana in states that had decriminalized marijuana in 1975—1976 than in states that had not decriminalized it (Table 3.6). Despite the greater increase among decriminalized states, the proportion of marijuana users among ER patients by 1978 was about equal in states that had and states that had not decriminalized marijuana. That is because the non-decriminalized states had higher rates of marijuana use before decriminalization.
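The arithmetic behind that comparison can be made explicit with a purely hypothetical illustration; the figures below are invented for exposition and are not taken from Table 3.6. If the decriminalized states began with a lower proportion of marijuana-involved ER cases and that proportion rose faster, both groups of states could still arrive at about the same level by 1978:

\[
p^{\mathrm{decrim}}_{1978} = p^{\mathrm{decrim}}_{1975} + \Delta_{\mathrm{decrim}} = 6\% + 4\% = 10\%,
\qquad
p^{\mathrm{nondecrim}}_{1978} = p^{\mathrm{nondecrim}}_{1975} + \Delta_{\mathrm{nondecrim}} = 8\% + 2\% = 10\%.
\]

A larger increase among the decriminalized states is thus compatible with equal 1978 proportions only because the starting points differed, which is the interpretation offered above.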
In contrast with marijuana use, rates of other illicit drug use among ER patients were substantially higher in states that did not decriminalize marijuana use. Thus, there are different possible reasons for the greater increase in marijuana use in the decriminalized states. On the one hand, decriminalization might have led to an increased use of marijuana (at least among people who sought health care in hospital ERs). On the other hand, the lack of decriminalization might have encouraged greater use of drugs that are even more dangerous than marijuana. The differences between the results for high school seniors from the Monitoring the Future study and the DAWN data are unclear, although the author of the latter study suggests that the reasons might lie in limitations inherent in how the DAWN data are collected.105

In 1976, the Netherlands adopted a policy of toleration for possession of up to 30 g of marijuana. There was little change in marijuana use during the seven years after the policy change, which suggests that the change itself had little effect; however, in 1984, when Dutch "coffee shops" that sold marijuana commercially spread throughout Amsterdam, marijuana use began to increase.98 During the 1990s, marijuana use has continued to increase in the Netherlands at the same rate as in the United States and Norway--two countries that strictly forbid marijuana sale and possession. Furthermore, during this period, approximately equal percentages of American and Dutch 18 year olds used marijuana; Norwegian 18 year olds were about half as likely to have used marijuana. The authors of this study conclude that there is little evidence that the Dutch marijuana depenalization policy led to increased marijuana use, although they note that commercialization of marijuana might have contributed to its increased use. Thus, there is little evidence that decriminalization of marijuana use necessarily leads to a substantial increase in marijuana use.

The Medical Marijuana Debate

The most recent National Household Survey on Drug Abuse showed that among people 12—17 years old the perceived risk associated with smoking marijuana once or twice a week had decreased significantly between 1996 and 1997.132 (Perceived risk is measured as the percentage of survey respondents who report that they "perceive great risk of harm" in using a drug at a specified frequency.) At first glance, that might seem to validate the fear that the medical marijuana debate of 1996--before passage of the California medical marijuana referendum in November 1997--had sent a message that marijuana use is safe. But a closer analysis of the data shows that Californian youth were an exception to the national trend. In contrast to the national trend, the perceived risk of marijuana use did not change among California youth between 1996 and 1997.132 In summary, there is no evidence that the medical marijuana debate has altered adolescents' perceptions of the risks associated with marijuana use.132

PSYCHOLOGICAL HARMS

In assessing the relative risks and benefits related to the medical use of marijuana, the psychological effects of marijuana can be viewed both as unwanted side effects and as potentially desirable end points in medical treatment. However, the vast majority of research on the psychological effects of marijuana has been in the context of assessing the drug's intoxicating effects when it is used for nonmedical purposes. Thus, the literature does not directly address the effects of marijuana taken for medical purposes. There are some important caveats to consider in attempting to extrapolate from the research mentioned above to the medical use of marijuana.
The circumstances under which psychoactive drugs are taken are an important influence on their psychological effects. Furthermore, research protocols to study marijuana's psychological effects in most instances were required to use participants who already had experience with marijuana. People who might have had adverse reactions to marijuana either would choose not to participate in this type of study or would be screened out by the investigator. Therefore, the incidence of adverse reactions to marijuana that might occur in people with no marijuana experience cannot be estimated from such studies. A further complicating factor concerns the dose regimen used for laboratory studies. In most instances, laboratory research studies have looked at the effects of single doses of marijuana, which might be different from those observed when the drug is taken repeatedly for a chronic medical condition. Nonetheless, laboratory studies are useful in suggesting what psychological functions might be studied when marijuana is evaluated for medical purposes.

Results of laboratory studies indicate that acute and chronic marijuana use has pronounced effects on mood, psychomotor, and cognitive functions. These psychological domains should therefore be considered in assessing the relative risks and therapeutic benefits related to marijuana or cannabinoids for any medical condition.

Psychiatric Disorders

A major question remains as to whether marijuana can produce lasting mood disorders or psychotic disorders, such as schizophrenia. Georgotas and Zeidenberg52 reported that smoking 10—22 marijuana cigarettes per day was associated with a gradual waning of the positive mood and social facilitating effects of marijuana and an increase in irritability, social isolation, and paranoid thinking. Inasmuch as smoking one cigarette is enough to make a person feel "high" for about 1—3 hours,68,95,118 the subjects in that study were taking very high doses of marijuana. Reports have described the development of apathy, lowered motivation, and impaired educational performance in heavy marijuana users who do not appear to be behaviorally impaired in other ways.121,122

There are clinical reports of marijuana-induced psychosis-like states (schizophrenia-like, depression, and/or mania) lasting for a week or more.112 Hollister66 suggests that, because of the varied nature of the psychotic states induced by marijuana, there is no specific "marijuana psychosis." Rather, the marijuana experience might trigger latent psychopathology of many types. As noted earlier, drug abuse is common among people with psychiatric disorders. More recently, Hall and colleagues60 concluded that "there is reasonable evidence that heavy cannabis use, and perhaps acute use in sensitive individuals, can produce an acute psychosis in which confusion, amnesia, delusions, hallucinations, anxiety, agitation and hypomanic symptoms predominate." Regardless of which of those interpretations is correct, the two reports agree that there is little evidence that marijuana alone produces a psychosis that persists after the period of intoxication.

Schizophrenia

The association between marijuana and schizophrenia is not well understood.
The scientific literature indicates general agreement that heavy marijuana use can precipitate schizophrenic episodes but not that marijuana use can cause the underlying psychotic disorders.59,96,151 Estimates of the prevalence of marijuana use among schizophrenics vary considerably but are in general agreement that it is at least as great as that among the general population.35 Schizophrenics prefer the effects of marijuana to those of alcohol and cocaine,134 which they seem to use less often than does the general population.134 The reasons for this are unknown, but it raises the possibility that schizophrenics might obtain some symptomatic relief from moderate marijuana use. But overall, compared with the general population, people with schizophrenia or with a family history of schizophrenia are likely to be at greater risk for adverse psychiatric effects from the use of cannabinoids.

Cognition

As discussed earlier, acutely administered marijuana impairs cognition.60,66,112 Positron emission tomography (PET) imaging allows investigators to measure the acute effects of marijuana smoking on active brain function. Human volunteers who perform auditory attention tasks before and after smoking a marijuana cigarette show impaired performance while under the influence of marijuana; this is associated with substantial reduction in blood flow to the temporal lobe of the brain, an area that is sensitive to such tasks.116,117 Marijuana smoking increases blood flow in other brain regions, such as the frontal lobes and lateral cerebellum.101,155

Earlier studies purporting to show structural changes in the brains of heavy marijuana users22 have not been replicated with more sophisticated techniques.28,89 Nevertheless, recent studies14,122 have found subtle defects in cognitive tasks in heavy marijuana users after a brief period (19—24 hours) of marijuana abstinence. Longer term cognitive deficits in heavy marijuana users have also been reported.140 Although these studies have attempted to match heavy marijuana users with subjects of similar cognitive abilities before exposure to marijuana use, the adequacy of this matching has been questioned.133 The complex methodological issues facing research in this area are well reviewed in an article by Pope and colleagues.121

Care must be exercised so that studies are designed to differentiate between changes in brain function caused by the effects of marijuana and by the illness for which marijuana is being given. AIDS dementia is an obvious example of this possible confusion. It is also important to determine whether repeated use of marijuana at therapeutic dosages produces any irreversible cognitive effects.

Psychomotor Performance

Marijuana administration has been reported to affect psychomotor performance on a number of tasks. The review by Chait and Pierri23 not only details the studies that have been done but also points out the inconsistencies among studies, the methodological shortcomings of many studies, and the large individual differences among the studies attributable to subject, situational, and methodological factors. Those factors must be considered in studies of psychomotor performance when participants are involved in a clinical trial of the efficacy of marijuana. The types of psychomotor functions that have been shown to be disrupted by the acute administration of marijuana include body sway, hand steadiness, rotary pursuit, driving and flying simulation, divided attention, sustained attention, and the digit-symbol substitution test.
A study of experienced airplane pilots showed that even 24 hours after a single marijuana cigarette their performance on flight simulator tests was impaired.163 Before the tests, however, they told the study investigators that they were sure their performance would be unaffected.

Cognitive impairments associated with acutely administered marijuana limit the activities that people would be able to do safely or productively. For example, no one under the influence of marijuana or THC should drive a vehicle or operate potentially dangerous equipment.

Amotivational Syndrome

One of the more controversial effects claimed for marijuana is the production of an "amotivational syndrome." This syndrome is not a medical diagnosis, but it has been used to describe young people who drop out of social activities and show little interest in school, work, or other goal-directed activity. When heavy marijuana use accompanies these symptoms, the drug is often cited as the cause, but no convincing data demonstrate a causal relationship between marijuana smoking and these behavioral characteristics.23 It is not enough to observe that a chronic marijuana user lacks motivation. Instead, relevant personality traits and behavior of subjects must be assessed before and after the subject becomes a heavy marijuana user. Because such research can only be done on subjects who become heavy marijuana users on their own, a large population study--such as the Epidemiological Catchment Area study described earlier in this chapter--would be needed to shed light on the relationship between motivation and marijuana use. Even then, although a causal relationship between the two could, in theory, be dismissed by an epidemiological study, causality could not be proven.
Three focal concerns in evaluating the medical use of marijuana are:
1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.

EFFECTS OF ISOLATED CANNABINOIDS

Cannabinoid Biology

Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.

Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:
o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this potential is observed under a narrower range of conditions than with benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild compared to opiates or benzodiazepines, such as diazepam (Valium).

Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.

Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.

Efficacy of Cannabinoid Drugs

The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.) The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications.
The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting. Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified. Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs. Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances. Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems. Influence of Psychological Effects on Therapeutic Effects The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite. Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect. Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials. RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA Physiological Risks Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. 
The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants. For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use. The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies. Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease. Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent. Marijuana Dependence and Withdrawal A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse. Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping. Marijuana as a "Gateway" Drug Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age. In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. 
An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use. Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential. Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids. USE OF SMOKED MARIJUANA Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups. Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy. The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use. Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions: o failure of all approved medications to provide relief has been documented, o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs, o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use. Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. 
One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.

In weighing such data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.

HOW THIS STUDY WAS CONDUCTED

Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.

Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluation of the methods used in various studies and the validity of the authors' conclusions.
Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.

The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).

Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.

The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS patients. Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.

MARIJUANA TODAY

The Changing Legal Landscape

In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug.
Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.

In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Laws (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments,13 less toxic, and in many cases more effective than conventional medicines. Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.

Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8

Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60—70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions.

Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed.
And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.

Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).

Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use. Marijuana's use as an herbal remedy before the 20th century is well documented.1,10,11 However, modern medicine adheres to different standards from those used in the past. The question is not whether marijuana can be used as an herbal remedy but rather how well this remedy meets today's standards of efficacy and safety. We understand much more than previous generations about medical risks. Our society generally expects its licensed medications to be safe, reliable, and of proven efficacy; contaminants and inconsistent ingredients in our health treatments are not tolerated. That refers not only to prescription and over-the-counter drugs but also to vitamin supplements and herbal remedies purchased at the grocery store. For example, the essential amino acid l-tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990 when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.

Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid.

Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development.
Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.

Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2

WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.

John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1). The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile.
For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old. Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain. Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting. Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it. Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients). Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission. The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. 
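To make the unknown-denominator problem described above concrete, the short Python sketch below works through the arithmetic with entirely hypothetical numbers; none of the values are survey results, and the only point is that the implied "response rate" is driven almost entirely by the assumed denominator.

# Hypothetical illustration of the unknown-denominator problem.
# None of these numbers are data; they only show how the implied response rate
# depends on how many people are assumed to have tried marijuana for medical purposes.
positive_reports = 40  # assumed number of favorable anecdotes heard

for assumed_total in (100, 1_000, 10_000, 100_000):
    rate = positive_reports / assumed_total
    print(f"If {assumed_total:>7,} people tried it, the implied response rate is {rate:.2%}")

The same count of positive stories is consistent with anything from a common benefit to a rare one, which is why anecdotal reports cannot substitute for controlled data.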
Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.

CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-Tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC. In the few cases where variants of THC are discussed, the full names are used.

All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."

Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.

Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994).14 The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.

ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology. Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana. Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development. Three focal concerns in evaluating the medical use of marijuana are: 1. Evaluation of the effects of isolated cannabinoids; 2. Evaluation of the risks associated with the medical use of marijuana; and 3. Evaluation of the use of smoked marijuana. EFFECTS OF ISOLATED CANNABINOIDS Cannabinoid Biology Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids. Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions: o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory. o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear. o The brain develops tolerance to cannabinoids. o Animal research demonstrates the potential for dependence, but this potential is observed under a narrower range of conditions than with benzodiazepines, opiates, cocaine, or nicotine. o Withdrawal symptoms can be observed in animals but appear to be mild compared to opiates or benzodiazepines, such as diazepam (Valium). Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems. Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. 
Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone. Efficacy of Cannabinoid Drugs The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.) The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting. Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified. Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs. Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances. Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems. Influence of Psychological Effects on Therapeutic Effects The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite. 
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect. Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials. RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA Physiological Risks Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants. For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use. The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies. Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease. Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent. Marijuana Dependence and Withdrawal A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse. Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping. 
Marijuana as a "Gateway" Drug Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age. In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use. Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential. Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids. USE OF SMOKED MARIJUANA Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups. Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy. The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use. 
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:
o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid-onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use.

Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.

It is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.

HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana.
Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.

Information presented at the scientific workshops was supplemented by analysis of the scientific literature and by evaluation of the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.

The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).

Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.

The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS patients.9
Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.

MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug.
Primum non nocere. This is the physician's first rule: whatever treatment a physician prescribes to a patient--first, that treatment must not harm the patient. The most contentious aspect of the medical marijuana debate is not whether marijuana can alleviate particular symptoms but rather the degree of harm associated with its use. This chapter explores the negative health consequences of marijuana use, first with respect to drug abuse, then from a psychological perspective, and finally from a physiological perspective.

THE MARIJUANA "HIGH"
The most commonly reported effects of smoked marijuana are a sense of well-being or euphoria and increased talkativeness and laughter alternating with periods of introspective dreaminess followed by lethargy and sleepiness (see reviews by Adams and Martin, 1996,1 Hall and Solowij,59 and Hall et al.60). A characteristic feature of a marijuana "high" is a distortion in the sense of time associated with deficits in short-term memory and learning. A marijuana smoker typically has a sense of enhanced physical and emotional sensitivity, including a feeling of greater interpersonal closeness. The most obvious behavioral abnormality displayed by someone under the influence of marijuana is difficulty in carrying on an intelligible conversation, perhaps because of an inability to remember what was just said even a few words earlier. The high associated with marijuana is not generally claimed to be integral to its therapeutic value.
But mood enhancement, anxiety reduction, and mild sedation can be desirable qualities in medications--particularly for patients suffering pain and anxiety. Thus, although the psychological effects of marijuana are merely side effects in the treatment of some symptoms, they might contribute directly to relief of other symptoms. They also must be monitored in controlled clinical trials to discern which effect of cannabinoids is beneficial. These possibilities are discussed later under the discussions of specific symptoms in chapter 4. The effects of various doses and routes of delivery of THC are shown in Table 3.1. Adverse Mood Reactions Although euphoria is the more common reaction to smoking marijuana, adverse mood reactions can occur. Such reactions occur most frequently in inexperienced users after large doses of smoked or oral marijuana. They usually disappear within hours and respond well to reassurance and a supportive environment. Anxiety and paranoia are the 59 most common acute adverse reactions; others include panic, depression, dysphoria, 1,40,66,69 depersonalization, delusions, illusions, and hallucinations. Of regular marijuana smokers, 17% report that they have experienced at least one of the symptoms, usually 145 early in their use of marijuana. of medical marijuana in people who have not previously used marijuana. DRUG DYNAMICS There are many misunderstandings about drug abuse and dependence (see reviews by 114 54 Those observations are particularly relevant for the use O'Brien themostrecentDiagnosticandStatisticalManualofMentalDisorders(DSM-IV), the most influential system in the United States for diagnoses of mental disorders, including substance abuse (see Box 3.1). Tolerance, dependence, and withdrawal are often presumed to imply abuse or addiction, but this is not the case. Tolerance and dependence are normal physiological adaptations to repeated use of any drug. The correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and some measure of physiological dependence. Even a patient who takes a medicine for appropriate medical indications and at the correct dosage can develop tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a beta-adrenergic receptor blocker, such as propranolol, might have a good therapeutic response; but if the drug is stopped abruptly, there can be a withdrawal syndrome that consists of tachycardia and a rebound increase in blood pressure to a point that is temporarily higher than before administration of the medication began. Because it is an illegal substance, some people consider any use of marijuana as substance abuse. However, this report uses the medical definition; that is, substance abuse is a maladaptive pattern of repeated substance use manifested by recurrent and 3 significantadverseconsequences. Substanceabuseanddependencearebothdiagnoses of pathological substance use. Dependence is the more serious diagnosis and implies compulsive drug use that is difficult to stop despite significant substance-related problems (see Box 3.2). Reinforcement Drugs vary in their ability to produce good feelings in users, and the more strongly reinforcing a drug is, the more likely it will be abused (G. Koob, Institute of Medicine (IOM) workshop). Marijuana is indisputably reinforcing for many people. 
The reinforcing properties of even so mild a stimulant as caffeine are typical of reinforcement 54 in 1994). Caffeine is reinforcing for many people at low doses (100—200 mg, the average amount of caffeine in one to two cups of and Goldstein ). The terms and concepts used in this report are as defined in 3 by addicting drugs (reviewed by Goldstein coffee) and is aversive at high doses (600 mg, the average amount of caffeine in six cups of coffee). The reinforcing effects of many drugs are different for different people. For example, caffeine was most reinforcing for test subjects who scored lowest on tests of anxiety but tended not to be reinforcing for the most anxious subjects. As an argument to dispute the abuse potential of marijuana, some have cited the observation that animals do not willingly self-administer THC, as they will cocaine. Even if that were true, it would not be relevant to human use of marijuana. The value in animal models of drug self-administration is not that they are necessary to show that a drug is reinforcing but rather that they provide a model in which the effects of a drug can be studied. Furthermore, THC is indeed rewarding to animals at some doses but, like many 93 reinforcing drugs, is aversive at high doses (4.0 mg/kg). in experiments conducted in animals outfitted with intravenous catheters that allow them 100 A specific set of neural pathways has been proposed to be a "reward system" that 51 to self-administer WIN 55,212, a drug that mimics the effects of THC. underlies the reinforcement of drugs of abuse and other pleasurable stimuli. properties of drugs are associated with their ability to increase concentrations of particular neurotransmitters in areas that are part of the proposed brain reward system. The median forebrain bundle and the nucleus accumbens are associated with brain reward 88 144 Cocaine, amphetamine, alcohol, opioids, nicotine, and THC extracellular fluid dopamine in the nucleus accumbens region (reviewed by Koob and Le pathways. all increase 88 110 Moal brain reward systems are not strictly "drug reinforcement centers." Rather, their biological role is to respond to a range of positive stimuli, including sweet foods and sexual attraction. Tolerance The rate at which tolerance to the various effects of any drug develops is an important consideration for its safety and efficacy. For medical use, tolerance to some effects of cannabinoids might be desirable. Differences in the rates at which tolerance to the multiple effects of a drug develops can be dangerous. For example, tolerance to the euphoric effects of heroin develops faster than tolerance to its respiratory depressant effects, so heroin users tend to increase their daily doses to reach their desired level of euphoria, thereby putting themselves at risk for respiratory arrest. Because tolerance to the various effects of cannabinoids might develop at different rates, it is important to evaluate independently their effects on mood, motor performance, memory, and attention, as well as any therapeutic use under investigation. Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, and it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Performance impairment is less among people who use marijuana heavily than it is among those who 29,104,124 and Nestler and Aghajanian in 1997). However, it is important to note that possibly because of tolerance. 
Heavy users tend to reach higher plasma concentrations of THC than light users after similar doses of THC, arguing against the possibility that heavy users show less performance impairment because they somehow absorb less THC (perhaps due to differences in smoking behavior).95

There appear to be variations in the development of tolerance to the different effects of marijuana and oral THC. For example, daily marijuana smokers participated in a residential laboratory study to compare the development of tolerance to THC pills and to smoked marijuana.61,62 One group was given marijuana cigarettes to smoke four times per day for four consecutive days; another group was given THC pills on the same schedule. During the four-day period, both groups became tolerant to feeling "high" and what they reported as a "good drug effect." In contrast, neither group became tolerant to the stimulatory effects of marijuana or THC on appetite. "Tolerance" does not mean that the drug no longer produced the effects but simply that the effects were less at the end than at the beginning of the four-day period. The marijuana smoking group reported feeling "mellow" after smoking and did not show tolerance to this effect; the group that took THC pills did not report feeling "mellow." The difference was also reported by many people who described their experiences to the IOM study team.

The oral and smoked doses were designed to deliver roughly equivalent amounts of THC to a subject. Each smoked marijuana dose consisted of five 10-second puffs of a marijuana cigarette containing 3.1% THC; the pills contained 30 mg of THC. Both groups also received placebo drugs during other four-day periods. Although the dosing of the two groups was comparable, different routes of administration resulted in different patterns of drug effect. The peak effect of smoked marijuana is usually felt within minutes and declines sharply after 30 minutes;68,95 the peak effect of oral THC is usually not felt until about an hour and lasts for several hours.118

Withdrawal

A distinctive marijuana and THC withdrawal syndrome has been identified, but it is mild and subtle compared with the profound physical syndrome of alcohol or heroin withdrawal. The symptoms of marijuana withdrawal include restlessness, irritability, mild agitation, insomnia, sleep EEG disturbance, nausea, and cramping (Table 3.2).31,74 In addition to those symptoms, two recent studies noted several more. A group of adolescents under treatment for conduct disorders also reported fatigue and illusions or hallucinations after marijuana abstinence (this study is discussed further in the section on "Prevalence and Predictors of Dependence on Marijuana and Other Drugs").31 In a residential study of daily marijuana users, withdrawal symptoms included sweating and runny nose, in addition to those listed above.62 A marijuana withdrawal syndrome, however, has been reported only in a group of adolescents in treatment for substance abuse problems31 and in a research setting where subjects were given marijuana or THC daily.62,74

Withdrawal symptoms have been observed in carefully controlled laboratory studies of people after use of both oral THC and smoked marijuana.61,62 In one study, subjects were given very high doses of oral THC: 180-210 mg per day for 10-20 days, roughly equivalent to smoking 9-10 2% THC cigarettes per day.
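The arithmetic behind that equivalence can be sketched as follows, assuming (purely for illustration, since the figure is not stated in the text) that a marijuana cigarette contains roughly 1 g of plant material:

\[
1\ \text{g} \times 2\% \approx 20\ \text{mg THC per cigarette},
\qquad
9\text{-}10\ \text{cigarettes} \times 20\ \text{mg} \approx 180\text{-}200\ \text{mg THC per day}.
\]

That range is close to the 180-210 mg/day oral doses described above, which is presumably the basis for the stated equivalence.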
During the abstinence period at the end of the study, the study subjects were irritable and showed insomnia, runny nose, sweating, and decreased appetite. The withdrawal symptoms, however, were short lived. In four days they had abated. The time course contrasts with that in another study in which lower doses of oral THC were used (80-120 mg/day for four days) and withdrawal symptoms were still near maximal after four days.61,62

In animals, simply discontinuing chronic heavy dosing of THC does not reveal withdrawal symptoms, but the "removal" of THC from the brain can be made abrupt by another drug that blocks THC at its receptor if administered when the chronic THC is withdrawn. The withdrawal syndrome is pronounced, and the behavior of the animals becomes hyperactive and disorganized.153 The half-life of THC in brain is about an hour.16,24 Although traces of THC can remain in the brain for much longer periods, the amounts are not physiologically significant. Thus, the lack of a withdrawal syndrome when THC is abruptly withdrawn without administration of a receptor-blocking drug is probably not due to a prolonged decline in brain concentrations.

Craving

Craving, the intense desire for a drug, is the most difficult aspect of addiction to overcome. Research on craving has focused on nicotine, alcohol, cocaine, and opiates but has not specifically addressed marijuana.115 Thus, while this section briefly reviews what is known about drug craving, its relevance to marijuana use has not been established.74

Most people who suffer from addiction relapse within a year of abstinence, and they often attribute their relapse to craving.58 As addiction develops, craving increases even as maladaptive consequences accumulate. Animal studies indicate that the tendency to relapse is based on changes in brain function that continue for months or years after the last use of the drug.115 Whether neurobiological conditions change during the manifestation of an abstinence syndrome remains an unanswered question in drug abuse research.88 The "liking" of sweet foods, for example, is mediated by opioid forebrain systems and by brain stem systems, whereas "wanting" seems to be mediated by ascending dopamine neurons that project to the nucleus accumbens.109

Anticraving medications have been developed for nicotine and alcohol. The antidepressant, bupropion, blocks nicotine craving, while naltrexone blocks alcohol craving.115 Another category of addiction medication includes drugs that block other drugs' effects. Some of those drugs also block craving. For example, methadone blocks the euphoric effects of heroin and also reduces craving.

MARIJUANA USE AND DEPENDENCE

Prevalence of Use

Millions of Americans have tried marijuana, but most are not regular users. In 1996, 68.6 million people--32% of the U.S. population over 12 years old--had tried marijuana or hashish at least once in their lifetime, but only 5% were current users.132 Marijuana use is most prevalent among 18- to 25-year-olds and declines sharply after the age of 34 (Figure 3.1).77,132 Whites are more likely than blacks to use marijuana in adolescence, although the difference decreases by adulthood.132

Most people who have used marijuana did so first during adolescence. Social influences, such as peer pressure and prevalence of use by peers, are highly predictive of initiation into marijuana use.9 Initiation is not, of course, synonymous with continued or regular use.
A cohort of 456 students who experimented with marijuana during their high school years were surveyed about their reasons for initiating, continuing, and stopping their marijuana use.9 Students who began as heavy users were excluded from the analysis. Those who did not become regular marijuana users cited two types of reasons for discontinuing. The first was related to health and well-being; that is, they felt that marijuana was bad for their health or for their family and work relationships. The second type was based on age-related changes in circumstances, including increased responsibility and decreased regular contact with other marijuana users. Among high school students who quit, parental disapproval was a stronger influence than peer disapproval in discontinuing marijuana use. In the initiation of marijuana use, the reverse was true. The reasons cited by those who continued to use marijuana were to "get in a better mood or feel better." Social factors were not a significant predictor of continued use. Data on young adults show similar trends. Those who use drugs in response to social influences are more likely to stop using them than those who also use them for psychological reasons.80

The age distribution of marijuana users among the general population contrasts with that of medical marijuana users. Marijuana use generally declines sharply after the age of 34 years, whereas medical marijuana users tend to be over 35. That raises the question of what, if any, relationship exists between abuse and medical use of marijuana; however, no studies reported in the scientific literature have addressed this question.

Prevalence and Predictors of Dependence on Marijuana and Other Drugs

Many factors influence the likelihood that a particular person will become a drug abuser or an addict; the user, the environment, and the drug are all important factors (Table 3.3).114 The first two categories apply to potential abuse of any substance; that is, people who are vulnerable to drug abuse for individual reasons and who find themselves in an environment that encourages drug abuse are initially likely to abuse the most readily available drug--regardless of its unique set of effects on the brain. The third category includes drug-specific effects that influence the abuse liability of a particular drug. As discussed earlier in this chapter, the more strongly reinforcing a drug is, the more likely that it will be abused. The abuse liability of a drug is enhanced by how quickly its effects are felt, and this is determined by how the drug is delivered. In general, the effects of drugs that are inhaled or injected are felt within minutes, and the effects of drugs that are ingested take a half hour or more.

The proportion of people who become addicted varies among drugs. Table 3.4 shows estimates for the proportion of people among the general population who used or became dependent on different types of drugs. The proportion of users that ever became dependent includes anyone who was ever dependent--whether it was for a period of weeks or years--and thus includes more than those who are currently dependent. Compared to most other drugs listed in this table, dependence among marijuana users is relatively rare. This might be due to differences in specific drug effects, the availability of or penalties associated with the use of the different drugs, or some combination. Daily use of most illicit drugs is extremely rare in the general population.
In 1989, daily use of marijuana among high school seniors was less than that of alcohol (2.9% and 4.2%, respectively).76

Drug dependence is more prevalent in some sectors of the population than in others. Age, gender, and race or ethnic group are all important.8 Excluding tobacco and alcohol, the following trends of drug dependence are statistically significant:8 Men are 1.6 times as likely as women to become drug dependent, non-Hispanic whites are about twice as likely as blacks to become drug dependent (the difference between non-Hispanic and Hispanic whites was not significant), and people 25-44 years old are more than three times as likely as those over 45 years old to become drug dependent.

More often than not, drug dependence co-occurs with other psychiatric disorders. Most people with a diagnosis of drug dependence disorder also have a diagnosis of another psychiatric disorder (76% of men and 65% of women).76 The most frequent co-occurring disorder is alcohol abuse; 60% of men and 30% of women with a diagnosis of drug dependence also abuse alcohol. In women who are drug dependent, phobic disorders and major depression are almost equally common (29% and 28%, respectively). Note that this study distinguished only between alcohol, nicotine and "other drugs"; marijuana was grouped among "other drugs." The frequency with which drug dependence and other psychiatric disorders co-occur might not be the same for marijuana and other drugs that were included in that category.

A strong association between drug dependence and antisocial personality or its precursor, conduct disorder, is also widely reported in children and adults (reviewed in 1998 by Robins126). Although the causes of the association are uncertain, Robins recently concluded that it is more likely that conduct disorders generally lead to substance abuse than the reverse.126 Such a trend might, however, depend on the age at which the conduct disorder is manifested. A longitudinal study by Brooks and co-workers noted a significant relationship between adolescent drug use and disruptive disorders in young adulthood; except for earlier psychopathology, such as childhood conduct disorder, the drug use preceded the psychiatric disorders.18 In contrast with use of other illicit drugs and tobacco, moderate (less than once a week and more than once a month) to heavy marijuana use did not predict anxiety or depressive disorders; but it was similar to those other drugs in predicting antisocial personality disorder. The rates of disruptive disorders increased with increased drug use. Thus, heavy drug use among adolescents can be a warning sign for later psychiatric disorders; whether it is an early manifestation of or a cause of those disorders remains to be determined.

Psychiatric disorders are more prevalent among adolescents who use drugs--including alcohol and nicotine--than among those who do not.79 Table 3.5 indicates that adolescent boys who smoke cigarettes daily are about 10 times as likely to have a psychiatric disorder diagnosis as those who do not smoke. However, the table does not compare intensity of use among the different drug classes. Thus, although daily cigarette smoking among adolescent boys is more strongly associated with psychiatric disorders than is any use of illicit substances, it does not follow that this comparison is true for every amount of cigarette smoking.79

Few marijuana users become dependent on it (Table 3.4), but those who do encounter problems similar to those associated with dependence on other drugs.19,143
Dependence appears to be less severe among people who use only marijuana than among those who abuse cocaine or those who abuse marijuana with other drugs (including alcohol).19,143

Data gathered in 1990-1992 from the National Comorbidity Study of over 8,000 persons 15-54 years old indicate that 4.2% of the general population were dependent on marijuana at some time.8 Similar results for the frequency of substance abuse among the general population were obtained from the Epidemiological Catchment Area Program, a survey of over 19,000 people. According to data collected in the early 1980s for that study, 4.4% of adults have, at one time, met the criteria for marijuana dependence. In comparison, 13.8% of adults met the criteria for alcohol dependence and 36.0% for tobacco dependence. After alcohol and nicotine, marijuana was the substance most frequently associated with a diagnosis of substance dependence.

In a 15-year study begun in 1979, 7.3% of 1,201 adolescents and young adults in suburban New Jersey at some time met the criteria for marijuana dependence; this indicates that the rate of marijuana dependence might be even higher in some groups of adolescents and young adults than in the general population.71 Adolescents meet the criteria for drug dependence at lower rates of marijuana use than do adults, and this suggests that they are more vulnerable to dependence than adults (see Box 3.2).25

Youths who are already dependent on other substances are particularly vulnerable to marijuana dependence. For example, Crowley and co-workers31 interviewed a group of 229 adolescent patients in a residential treatment program for delinquent, substance-involved youth and found that those patients were dependent on an average of 3.2 substances. The adolescents had previously been diagnosed as dependent on at least one substance (including nicotine and alcohol) and had three or more conduct disorder symptoms during their life. About 83% of those who had used marijuana at least six times went on to develop marijuana dependence. About equal numbers of youths in the study had a diagnosis of marijuana dependence and a diagnosis of alcohol dependence; fewer were nicotine dependent. Comparisons of dependence potential between different drugs should be made cautiously. The probability that a particular drug will be abused is influenced by many factors, including the specific drug effects and availability of the drug.

Although parents often state that marijuana caused their children to be rebellious, the troubled adolescents in the study by Crowley and co-workers developed conduct disorders before marijuana abuse. That is consistent with reports that the more symptoms of conduct disorders children have, the younger they begin drug abuse,127 and that the earlier they begin drug use, the more likely it is to be followed by abuse or dependence.125

Genetic factors are known to play a role in the likelihood of abuse for drugs other than marijuana,7,129 and it is not unexpected that genetic factors play a role in the marijuana experience, including the likelihood of abuse. A study of over 8,000 male twins listed in the Vietnam Era Twin Registry indicated that genes have a statistically significant influence on whether a person finds the effects of marijuana pleasant.97 Not surprisingly, people who found marijuana to be pleasurable used it more often than those who found it unpleasant.
The study suggested that, although social influences play an important role in the initiation of use, individual differences--perhaps associated with the brain's reward system--influence whether a person will continue using marijuana. Similar results were found in a study of female twins.86 Family and social environment strongly influenced the likelihood of ever using marijuana but had little effect on the likelihood of heavy use or abuse. The latter were more influenced by genetic factors. Those results are consistent with the finding that the degree to which rats find THC rewarding is genetically based.

In summary, although few marijuana users develop dependence, some do. But they appear to be less likely to do so than users of other drugs (including alcohol and nicotine), and marijuana dependence appears to be less severe than dependence on other drugs. Drug dependence is more prevalent in some sectors of the population than others, but no group has been identified as particularly vulnerable to the drug-specific effects of marijuana. Adolescents, especially troubled ones, and people with psychiatric disorders (including substance abuse) appear to be more likely than the general population to become dependent on marijuana. If marijuana or cannabinoid drugs were approved for therapeutic uses, it would be important to consider the possibility of dependence, particularly for patients at high risk for substance dependence. Some controlled substances that are approved medications produce dependence after long-term use; this, however, is a normal part of patient management and does not generally present undue risk to the patient.

Progression from Marijuana to Other Drugs

The fear that marijuana use might cause, as opposed to merely precede, the use of drugs that are more harmful is of great concern. To judge from comments submitted to the IOM study team, it appears to be of greater concern than the harms directly related to marijuana itself. The discussion that marijuana is a "gateway" drug implicitly recognizes that other illicit drugs might inflict greater damage to health or social relations than marijuana.92 Although the scientific literature generally discusses drug use progression between a variety of drug classes, including alcohol and tobacco, the public discussion has focused on marijuana as a "gateway" drug that leads to abuse of more harmful illicit drugs, such as cocaine and heroin.

There are strikingly regular patterns in the progression of drug use from adolescence to adulthood. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug that most people encounter. Not surprisingly, most users of other illicit drugs used marijuana first.81,82 In fact, most drug users do not begin their drug use with marijuana--they begin with alcohol and nicotine, usually when they are too young to do so legally.82,90

The gateway analogy evokes two ideas that are often confused. The first, more often referred to as the "stepping stone" hypothesis, is the idea that progression from marijuana to other drugs arises from pharmacological properties of marijuana itself.82 The second is that marijuana serves as a gateway to the world of illegal drugs in which youths have greater opportunity and are under greater social pressure to try other illegal drugs. The latter interpretation is most often used in the scientific literature, and it is supported, although not proven, by the available data.
The stepping stone hypothesis applies to marijuana only in the broadest sense. People who enjoy the effects of marijuana are, logically, more likely to be willing to try other mood-altering drugs than are people who are not willing to try marijuana or who dislike its effects. In other words, many of the factors associated with a willingness to use marijuana are, presumably, the same as those associated with a willingness to use other illicit drugs. Those factors include physiological reactions to the drug effect, which are consistent with the stepping stone hypothesis, but also psychosocial factors, which are independent of drug-specific effects. There is no evidence that marijuana serves as a stepping stone on the basis of its particular physiological effect. One might argue that marijuana is generally used before other illicit mood-altering drugs, in part, because its effects are milder; in that case, marijuana is a stepping stone only in the same sense as taking a small dose of a particular drug and then increasing that dose over time is a stepping stone to increased drug use.

Whereas the stepping stone hypothesis presumes a predominantly physiological component of drug progression, the gateway theory is a social theory. The latter does not suggest that the pharmacological qualities of marijuana make it a risk factor for progression to other drug use. Instead, the legal status of marijuana makes it a gateway drug.82

Psychiatric disorders are associated with substance dependence and are probably risk factors for progression in drug use. For example, the troubled adolescents studied by Crowley and co-workers31 were dependent on an average of 3.2 substances, and this suggests that their conduct disorders were associated with increased risk of progressing from one drug to another. Abuse of a single substance is probably also a risk factor for later multiple drug use. For example, in a longitudinal study that examined drug use and dependence, about 26% of problem drinkers reported that they first used marijuana after the onset of alcohol-related problems (R. Pandina, IOM workshop). The study also found that 11% of marijuana users developed chronic marijuana problems; most also had alcohol problems.

Intensity of drug use is an important risk factor in progression. Daily marijuana users are more likely than their peers to be extensive users of other substances (for review, see Kandel and Davies78). Of 34- to 35-year-old men who had used marijuana 10-99 times by the age 24-25, 75% never used any other illicit drug; 53% of those who had used it more than 100 times did progress to using other illicit drugs 10 or more times. Comparable proportions for women are 64% and 50%.78 The factors that best predict use of illicit drugs other than marijuana are probably the following: age of first alcohol or nicotine use, heavy marijuana use, and psychiatric disorders. However, progression to illicit drug use is not synonymous with heavy or persistent drug use. Indeed, although the age of onset of use of licit drugs (alcohol and nicotine) predicts later illicit drug use, it does not appear to predict persistent or heavy use of illicit drugs.90

Data on the gateway phenomenon are often overinterpreted. For example, one study reports that "marijuana's role as a gateway drug appears to have increased."55 It was a retrospective study based on interviews of drug abusers who reported smoking crack or injecting heroin daily.
The data from the study provide no indication of what proportion of marijuana users become serious drug abusers; rather, they indicate that serious drug abusers usually use marijuana before they smoke crack or inject heroin. Only a small percentage of the adult population uses crack or heroin daily; during the five-year period from 1993 to 1997, an average of three people per 1,000 used crack and about two per 1,000 used heroin in the preceding month.132

Many of the data on which the gateway theory is based do not measure dependence; instead, they measure use--even once-only use. Thus, they show only that marijuana users are more likely to use other illicit drugs (even if only once) than are people who never use marijuana, not that they become dependent or even frequent users. The authors of these studies are careful to point out that their data should not be used as evidence of an inexorable causal progression; rather they note that identifying stage-based user groups makes it possible to identify the specific risk factors that predict movement from one stage of drug use to the next--the real issue in the gateway discussion.25

In the sense that marijuana use typically precedes rather than follows initiation into the use of other illicit drugs, it is indeed a gateway drug. However, it does not appear to be a gateway drug to the extent that it is the cause or even that it is the most significant predictor of serious drug abuse; that is, care must be taken not to attribute cause to association. The most consistent predictors of serious drug use appear to be the intensity of marijuana use and co-occurring psychiatric disorders or a family history of psychopathology (including alcoholism).78,83

An important caution is that data on drug use progression pertain to nonmedical drug use. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would be the same. Kandel and co-workers also included nonmedical use of prescription psychoactive drugs in their study of drug use progression.82 That study also examined whether there is a clear and consistent sequence of drug use involving the abuse of prescription psychoactive drugs. The current data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse among medical marijuana users. Whether the medical use of marijuana might encourage drug abuse among the general community--not among medical marijuana users themselves but among others simply because of the fact that marijuana would be used for medical purposes--is another question.

LINK BETWEEN MEDICAL USE AND DRUG ABUSE

Almost everyone who spoke or wrote to the IOM study team about the potential harms posed by the medical use of marijuana felt that it would send the wrong message to children and teenagers. They stated that information about the harms caused by marijuana is undermined by claims that marijuana might have medical value. Yet many of our powerful medicines are also dangerous medicines. These two facets of medicine--effectiveness and risk--are inextricably linked. The question here is not whether marijuana can be both harmful and helpful but whether the perception of its benefits will increase its abuse. For now any answer to the question remains conjecture. Because marijuana is not an approved medicine, there is little information about the consequences of its medical use in modern society. Reasonable inferences might be drawn from some examples.
Opiates, such as morphine and codeine, are an example of a class of drugs that is both abused to great harm and used to great medical benefit, and it would be useful to examine the relationship between their medical use and their abuse. In a "natural experiment" during 1973-1978 some states decriminalized marijuana, and others did not. Finally, one can examine the short-term consequences of the publicity surrounding the 1996 medical marijuana campaign in California and ask whether it had any measurable impact on marijuana consumption among youth in California; the consequences of the "message" that marijuana might have medical use are examined below.

Medical Use and Abuse of Opiates

Two highly influential papers published in the 1920s and 1950s led to widespread concern among physicians and medical licensing boards that liberal use of opiates would result in many addicts (reviewed by Moulin and co-workers106 in 1996). Such fears have proven unfounded; it is now recognized that fear of producing addicts through medical treatment resulted in needless suffering among patients with pain as physicians needlessly limited appropriate doses of medications.27,44 In contrast with the use of alcohol, nicotine, and illicit drugs, few people begin their drug addiction problems with misuse of drugs that have been prescribed for medical use.114 Opiates are carefully regulated in the medical setting, and diversion of medically prescribed opiates to the black market is not generally considered to be a major problem.

No evidence suggests that the use of opiates or cocaine for medical purposes has increased the perception that their illicit use is safe or acceptable. Clearly, there are risks that patients will abuse marijuana for its psychoactive effects and some likelihood of diversion of marijuana from legitimate medical channels into the illicit market. But those risks do not differentiate marijuana from many accepted medications that are abused by some patients or diverted from medical channels for nonmedical use. Medications with abuse potential are placed in Schedule II of the Controlled Substances Act, which brings them under stricter control, including quotas on the amount that can be legally manufactured (see chapter 5 for discussion of the Controlled Substances Act). That scheduling also signals to physicians that a drug has abuse potential and that they should monitor its use by patients who could be at risk for drug abuse.

Marijuana Decriminalization

Monitoring the Future, the annual survey of values and lifestyles of high school seniors, revealed that high school seniors in decriminalized states reported using no more marijuana than did their counterparts in states where marijuana was not decriminalized.72 Another study reported somewhat conflicting evidence indicating that decriminalization had increased marijuana use.105 That study used data from the Drug Abuse Warning Network (DAWN), which has collected data on drug-related emergency room (ER) cases since 1975. There was a greater increase from 1975 to 1978 in the proportion of ER patients who had used marijuana in states that had decriminalized marijuana in 1975-1976 than in states that had not decriminalized it (Table 3.6). Despite the greater increase among decriminalized states, the proportion of marijuana users among ER patients by 1978 was about equal in states that had and states that had not decriminalized marijuana. That is because the non-decriminalized states had higher rates of marijuana use before decriminalization.
In contrast with marijuana use, rates of other illicit drug use among ER patients were substantially higher in states that did not decriminalize marijuana use. Thus, there are different possible reasons for the greater increase in marijuana use in the decriminalized states. On the one hand, decriminalization might have led to an increased use of marijuana (at least among people who sought health care in hospital ERs). On the other hand, the lack of decriminalization might have encouraged greater use of drugs that are even more dangerous than marijuana. The differences between the results for high school seniors from the Monitoring the Future study and the DAWN data are unclear, although the author of the latter study suggests that the reasons might lie in limitations inherent in how the DAWN data are collected.105

In 1976, the Netherlands adopted a policy of toleration for possession of up to 30 g of marijuana. There was little change in marijuana use during the seven years after the policy change, which suggests that the change itself had little effect; however, in 1984, when Dutch "coffee shops" that sold marijuana commercially spread throughout Amsterdam, marijuana use began to increase.98 During the 1990s, marijuana use has continued to increase in the Netherlands at the same rate as in the United States and Norway--two countries that strictly forbid marijuana sale and possession. Furthermore, during this period, approximately equal percentages of American and Dutch 18 year olds used marijuana; Norwegian 18 year olds were about half as likely to have used marijuana. The authors of this study conclude that there is little evidence that the Dutch marijuana depenalization policy led to increased marijuana use, although they note that commercialization of marijuana might have contributed to its increased use. Thus, there is little evidence that decriminalization of marijuana use necessarily leads to a substantial increase in marijuana use.

The Medical Marijuana Debate

The most recent National Household Survey on Drug Abuse showed that among people 12-17 years old the perceived risk associated with smoking marijuana once or twice a week had decreased significantly between 1996 and 1997.132 (Perceived risk is measured as the percentage of survey respondents who report that they "perceive great risk of harm" in using a drug at a specified frequency.) At first glance, that might seem to validate the fear that the medical marijuana debate of 1996--before passage of the California medical marijuana referendum in November 1996--had sent a message that marijuana use is safe. But a closer analysis of the data shows that Californian youth were an exception to the national trend. In contrast to the national trend, the perceived risk of marijuana use did not change among California youth between 1996 and 1997.132 In summary, there is no evidence that the medical marijuana debate has altered adolescents' perceptions of the risks associated with marijuana use.132

PSYCHOLOGICAL HARMS

In assessing the relative risks and benefits related to the medical use of marijuana, the psychological effects of marijuana can be viewed both as unwanted side effects and as potentially desirable end points in medical treatment. However, the vast majority of research on the psychological effects of marijuana has been in the context of assessing the drug's intoxicating effects when it is used for nonmedical purposes. Thus, the literature does not directly address the effects of marijuana taken for medical purposes. There are some important caveats to consider in attempting to extrapolate from the research mentioned above to the medical use of marijuana.
The circumstances under which psychoactive drugs are taken are an important influence on their psychological effects. Furthermore, research protocols to study marijuana's psychological effects in most instances were required to use participants who already had experience with marijuana. People who might have had adverse reactions to marijuana either would choose not to participate in this type of study or would be screened out by the investigator. Therefore, the incidence of adverse reactions to marijuana that might occur in people with no marijuana experience cannot be estimated from such studies. A further complicating factor concerns the dose regimen used for laboratory studies. In most instances, laboratory research studies have looked at the effects of single doses of marijuana, which might be different from those observed when the drug is taken repeatedly for a chronic medical condition. Nonetheless, laboratory studies are useful in suggesting what psychological functions might be studied when marijuana is evaluated for medical purposes.

Results of laboratory studies indicate that acute and chronic marijuana use has pronounced effects on mood, psychomotor, and cognitive functions. These psychological domains should therefore be considered in assessing the relative risks and therapeutic benefits related to marijuana or cannabinoids for any medical condition.

Psychiatric Disorders

A major question remains as to whether marijuana can produce lasting mood disorders or psychotic disorders, such as schizophrenia. Georgotas and Zeidenberg52 reported that smoking 10-22 marijuana cigarettes per day was associated with a gradual waning of the positive mood and social facilitating effects of marijuana and an increase in irritability, social isolation, and paranoid thinking. Inasmuch as smoking one cigarette is enough to make a person feel "high" for about 1-3 hours,68,95,118 the subjects in that study were taking very high doses of marijuana. Reports have described the development of apathy, lowered motivation, and impaired educational performance in heavy marijuana users who do not appear to be behaviorally impaired in other ways.121,122

There are clinical reports of marijuana-induced psychosis-like states (schizophrenia-like, depression, and/or mania) lasting for a week or more.112 As noted earlier, drug abuse is common among people with psychiatric disorders. Hollister66 suggests that, because of the varied nature of the psychotic states induced by marijuana, there is no specific "marijuana psychosis." Rather, the marijuana experience might trigger latent psychopathology of many types. More recently, Hall and colleagues60 concluded that "there is reasonable evidence that heavy cannabis use, and perhaps acute use in sensitive individuals, can produce an acute psychosis in which confusion, amnesia, delusions, hallucinations, anxiety, agitation and hypomanic symptoms predominate." Regardless of which of those interpretations is correct, the two reports agree that there is little evidence that marijuana alone produces a psychosis that persists after the period of intoxication.

Schizophrenia

The association between marijuana and schizophrenia is not well understood.
The scientific literature indicates general agreement that heavy marijuana use can precipitate schizophrenic episodes but not that marijuana use can cause the underlying psychotic disorders.59,96,151 Estimates of the prevalence of marijuana use among schizophrenics vary considerably but are in general agreement that it is at least as great as that among the general population.35 Schizophrenics prefer the effects of marijuana to those of alcohol and cocaine,134 which they seem to use less often than does the general population.134 The reasons for this are unknown, but it raises the possibility that schizophrenics might obtain some symptomatic relief from moderate marijuana use. But overall, compared with the general population, people with schizophrenia or with a family history of schizophrenia are likely to be at greater risk for adverse psychiatric effects from the use of cannabinoids.

Cognition

As discussed earlier, acutely administered marijuana impairs cognition.60,66,112 Positron emission tomography (PET) imaging allows investigators to measure the acute effects of marijuana smoking on active brain function. Human volunteers who perform auditory attention tasks before and after smoking a marijuana cigarette show impaired performance while under the influence of marijuana; this is associated with substantial reduction in blood flow to the temporal lobe of the brain, an area that is sensitive to such tasks.116,117 Marijuana smoking increases blood flow in other brain regions, such as the frontal lobes and lateral cerebellum.101,155

Earlier studies purporting to show structural changes in the brains of heavy marijuana users22 have not been replicated with more sophisticated techniques.28,89 Nevertheless, recent studies14,122 have found subtle defects in cognitive tasks in heavy marijuana users after a brief period (19-24 hours) of marijuana abstinence. Longer term cognitive deficits in heavy marijuana users have also been reported.140 Although these studies have attempted to match heavy marijuana users with subjects of similar cognitive abilities before exposure to marijuana use, the adequacy of this matching has been questioned.133 The complex methodological issues facing research in this area are well reviewed in an article by Pope and colleagues.121 Care must be exercised so that studies are designed to differentiate between changes in brain function caused by the effects of marijuana and by the illness for which marijuana is being given. AIDS dementia is an obvious example of this possible confusion. It is also important to determine whether repeated use of marijuana at therapeutic dosages produces any irreversible cognitive effects.

Psychomotor Performance

Marijuana administration has been reported to affect psychomotor performance on a number of tasks. The review by Chait and Pierri23 not only details the studies that have been done but also points out the inconsistencies among studies, the methodological shortcomings of many studies, and the large individual differences among the studies attributable to subject, situational, and methodological factors. Those factors must be considered in studies of psychomotor performance when participants are involved in a clinical trial of the efficacy of marijuana. The types of psychomotor functions that have been shown to be disrupted by the acute administration of marijuana include body sway, hand steadiness, rotary pursuit, driving and flying simulation, divided attention, sustained attention, and the digit-symbol substitution test.
A study of experienced airplane pilots showed that even 24 hours after a single marijuana cigarette their performance on flight simulator tests was impaired.163 Before the tests, however, they told the study investigators that they were sure their performance would be unaffected.

Cognitive impairments associated with acutely administered marijuana limit the activities that people would be able to do safely or productively. For example, no one under the influence of marijuana or THC should drive a vehicle or operate potentially dangerous equipment.

Amotivational Syndrome

One of the more controversial effects claimed for marijuana is the production of an "amotivational syndrome." This syndrome is not a medical diagnosis, but it has been used to describe young people who drop out of social activities and show little interest in school, work, or other goal-directed activity. When heavy marijuana use accompanies these symptoms, the drug is often cited as the cause, but no convincing data demonstrate a causal relationship between marijuana smoking and these behavioral characteristics.23 It is not enough to observe that a chronic marijuana user lacks motivation. Instead, relevant personality traits and behavior of subjects must be assessed before and after the subject becomes a heavy marijuana user. Because such research can only be done on subjects who become heavy marijuana users on their own, a large population study--such as the Epidemiological Catchment Area study described earlier in this chapter--would be needed to shed light on the relationship between motivation and marijuana use. Even then, although a causal relationship between the two could, in theory, be dismissed by an epidemiological study, causality could not be proven.
Three focal concerns in evaluating the medical use of marijuana are:

1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.

EFFECTS OF ISOLATED CANNABINOIDS

Cannabinoid Biology

Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.

Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:

o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this potential is observed under a narrower range of conditions than with benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild compared to opiates or benzodiazepines, such as diazepam (Valium).

Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.

Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.

Efficacy of Cannabinoid Drugs

The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.) The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications.
The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.

Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.

Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid-based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.

Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.

Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.

Influence of Psychological Effects on Therapeutic Effects

The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.

Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria, can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect.

Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials.

RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA

Physiological Risks

Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications.
The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants.

For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use.

The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies.

Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease.

Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent.

Marijuana Dependence and Withdrawal

A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, antisocial personality and conduct disorders are closely associated with substance abuse.

Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping.

Marijuana as a "Gateway" Drug

Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age.

In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs.
An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use.

Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential.

Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids.

USE OF SMOKED MARIJUANA

Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups.

Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy.

The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use.

Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:

o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid-onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use.

Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting.
One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.

Although this report focuses on scientific data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain.

Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.

HOW THIS STUDY WAS CONDUCTED

Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions. Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluation of the methods used in various studies and the validity of the authors' conclusions.
Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.

The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).

Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.

The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS patients.9 Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.

MARIJUANA TODAY

The Changing Legal Landscape

In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug.
Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.

In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Laws (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.

Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8

Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60—70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions. Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.

Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).

Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.1,10,11 Marijuana's use as an herbal remedy before the 20th century is well documented. However, modern medicine adheres to different standards from those used in the past. The question is not whether marijuana can be used as an herbal remedy but rather how well this remedy meets today's standards of efficacy and safety. We understand much more than previous generations about medical risks. Our society generally expects its licensed medications to be safe, reliable, and of proven efficacy; contaminants and inconsistent ingredients in our health treatments are not tolerated. That refers not only to prescription and over-the-counter drugs but also to vitamin supplements and herbal remedies purchased at the grocery store. For example, the essential amino acid l-tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990, when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.

Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid. Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development.
Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.

Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2

WHO USES MEDICAL MARIJUANA?

There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.

John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1).

The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile.
For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old. Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second-largest group is patients with chronic pain.

Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting. Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.

Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients).

Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission. The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them.

But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports.
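To make the unknown-denominator problem concrete, the short sketch below uses purely hypothetical counts (they are not data from this study) to show how the same number of favorable anecdotes can imply very different response rates depending on how many people actually tried marijuana for medical purposes.

```python
# Illustrative sketch only: the counts below are hypothetical and are not
# taken from the IOM study. Anecdotes supply a numerator without a
# denominator, so the implied response rate is indeterminate.

favorable_reports = 40  # hypothetical number of positive anecdotes heard

for assumed_users in (50, 500, 5000):  # hypothetical totals who tried it
    implied_rate = favorable_reports / assumed_users
    print(f"{favorable_reports} favorable reports out of {assumed_users} "
          f"assumed users -> implied response rate {implied_rate:.1%}")
```

The same 40 favorable reports are compatible with an 80 percent response rate or a response rate under 1 percent, which is why controlled data rather than anecdote are needed to estimate clinical value.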
Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.

CANNABIS AND THE CANNABINOIDS

Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-Tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC. In the few cases where variants of THC are discussed, the full names are used.

All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."

Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.

Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994). The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.

ORGANIZATION OF THE REPORT

Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology.

Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana. Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.

Three focal concerns in evaluating the medical use of marijuana are:

1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.

EFFECTS OF ISOLATED CANNABINOIDS

Cannabinoid Biology

Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.

Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:

o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this potential is observed under a narrower range of conditions than with benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild compared to opiates or benzodiazepines, such as diazepam (Valium).

Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.

Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body.
Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.

Efficacy of Cannabinoid Drugs

The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.)

The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.

Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.

Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid-based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.

Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.

Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.

Influence of Psychological Effects on Therapeutic Effects

The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect. Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials. RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA Physiological Risks Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants. For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use. The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies. Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease. Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent. Marijuana Dependence and Withdrawal A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse. Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping. 
Marijuana as a "Gateway" Drug Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age. In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use. Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential. Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids. USE OF SMOKED MARIJUANA Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups. Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy. The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use. 
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions: o failure of all approved medications to provide relief has been documented, o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs, o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a submission by a physician to provide marijuana to a patient for a specified use. Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones. Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use. It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments. HOW THIS STUDY WAS CONDUCTED Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. 
Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions. Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves. The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers). Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from. The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS 9 patients. 
Marinol is the brand name for dronabinol, which is (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting. MARIJUANA TODAY The Changing Legal Landscape In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for 3 nonmedicalpurposes. MarijuanawasfirstregulatedatthefederallevelbytheMarijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. -tetrahydrocannabinol Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior. In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Legislation (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, 13 less toxic, and in many cases more effective than conventional medicines. years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients. Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). 
Although most courts rejected these 8 Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60—70 percent of 15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions. Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been claims, some accepted them. respondents in favor of allowing medical uses of marijuana. Thus, for 25 important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate. Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D). Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use. 1,10,11 Marijuana's use as an herbal remedy before the 20th century is well documented. However, modern medicine adheres to different standards from those used in the past. The question is not whether marijuana can be used as an herbal remedy but rather how well this remedy meets today's standards of efficacy and safety. We understand much more than previous generations about medical risks. Our society generally expects its licensed medications to be safe, reliable, and of proven efficacy; contaminants and inconsistent ingredients in our health treatments are not tolerated. That refers not only to prescription and over-the-counter drugs but also to vitamin supplements and herbal remedies purchased at the grocery store. For example, the essential amino acid l- tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990 when it became linked to an epidemic of a new and potentially fatal illness 9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer. Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their 7 rootseitherdirectlyorindirectlyinplantremedies. 
Atthesametime,mostcurrent prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid. Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of (eosinophilia-myalgia syndrome). modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds. Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly 4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of 5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by 2 WHO USES MEDICAL MARIJUANA? There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed. John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. 
About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1). seek alternative, low-technology therapies. visits to primary care physicians. the FDA (Food and Drug Administration). The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old. Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain. Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting. Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it. Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients). Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. 
Three representative cases presented to the IOM study team appear in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission. The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.

CANNABIS AND THE CANNABINOIDS

Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC; in the few cases where variants of THC are discussed, the full names are used.

All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy." Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.

Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994).14 The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage; they degrade under any storage condition.

ORGANIZATION OF THE REPORT

Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology. Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use. Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana. Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.

Primum non nocere. This is the physician's first rule: whatever treatment a physician prescribes to a patient--first, that treatment must not harm the patient. The most contentious aspect of the medical marijuana debate is not whether marijuana can alleviate particular symptoms but rather the degree of harm associated with its use. This chapter explores the negative health consequences of marijuana use, first with respect to drug abuse, then from a psychological perspective, and finally from a physiological perspective.

THE MARIJUANA "HIGH"

The most commonly reported effects of smoked marijuana are a sense of well-being or euphoria and increased talkativeness and laughter alternating with periods of introspective dreaminess followed by lethargy and sleepiness (see reviews by Adams and Martin, 1996,1 Hall and Solowij,59 and Hall et al.60). A characteristic feature of a marijuana "high" is a distortion in the sense of time associated with deficits in short-term memory and learning. A marijuana smoker typically has a sense of enhanced physical and emotional sensitivity, including a feeling of greater interpersonal closeness. The most obvious behavioral abnormality displayed by someone under the influence of marijuana is difficulty in carrying on an intelligible conversation, perhaps because of an inability to remember what was just said even a few words earlier. The high associated with marijuana is not generally claimed to be integral to its therapeutic value.
But mood enhancement, anxiety reduction, and mild sedation can be desirable qualities in medications--particularly for patients suffering pain and anxiety. Thus, although the psychological effects of marijuana are merely side effects in the treatment of some symptoms, they might contribute directly to relief of other symptoms. They also must be monitored in controlled clinical trials to discern which effect of cannabinoids is beneficial. These possibilities are discussed later under the discussions of specific symptoms in chapter 4. The effects of various doses and routes of delivery of THC are shown in Table 3.1.

Adverse Mood Reactions

Although euphoria is the more common reaction to smoking marijuana, adverse mood reactions can occur. Such reactions occur most frequently in inexperienced users after large doses of smoked or oral marijuana. They usually disappear within hours and respond well to reassurance and a supportive environment. Anxiety and paranoia are the most common acute adverse reactions;59 others include panic, depression, dysphoria, depersonalization, delusions, illusions, and hallucinations.1,40,66,69 Of regular marijuana smokers, 17% report that they have experienced at least one of the symptoms, usually early in their use of marijuana.145 Those observations are particularly relevant for the use of medical marijuana in people who have not previously used marijuana.

DRUG DYNAMICS

There are many misunderstandings about drug abuse and dependence (see reviews by O'Brien114 and Goldstein54). The terms and concepts used in this report are as defined in the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),3 the most influential system in the United States for diagnoses of mental disorders, including substance abuse (see Box 3.1). Tolerance, dependence, and withdrawal are often presumed to imply abuse or addiction, but this is not the case. Tolerance and dependence are normal physiological adaptations to repeated use of any drug. The correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and some measure of physiological dependence. Even a patient who takes a medicine for appropriate medical indications and at the correct dosage can develop tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a beta-adrenergic receptor blocker, such as propranolol, might have a good therapeutic response; but if the drug is stopped abruptly, there can be a withdrawal syndrome that consists of tachycardia and a rebound increase in blood pressure to a point that is temporarily higher than before administration of the medication began.

Because it is an illegal substance, some people consider any use of marijuana as substance abuse. However, this report uses the medical definition; that is, substance abuse is a maladaptive pattern of repeated substance use manifested by recurrent and significant adverse consequences.3 Substance abuse and dependence are both diagnoses of pathological substance use. Dependence is the more serious diagnosis and implies compulsive drug use that is difficult to stop despite significant substance-related problems (see Box 3.2).

Reinforcement

Drugs vary in their ability to produce good feelings in users, and the more strongly reinforcing a drug is, the more likely it will be abused (G. Koob, Institute of Medicine (IOM) workshop). Marijuana is indisputably reinforcing for many people. The reinforcing properties of even so mild a stimulant as caffeine are typical of reinforcement by addicting drugs (reviewed by Goldstein54 in 1994). Caffeine is reinforcing for many people at low doses (100-200 mg, the average amount of caffeine in one to two cups of coffee) and is aversive at high doses (600 mg, the average amount of caffeine in six cups of coffee). The reinforcing effects of many drugs are different for different people. For example, caffeine was most reinforcing for test subjects who scored lowest on tests of anxiety but tended not to be reinforcing for the most anxious subjects.

As an argument to dispute the abuse potential of marijuana, some have cited the observation that animals do not willingly self-administer THC, as they will cocaine. Even if that were true, it would not be relevant to human use of marijuana. The value in animal models of drug self-administration is not that they are necessary to show that a drug is reinforcing but rather that they provide a model in which the effects of a drug can be studied. Furthermore, THC is indeed rewarding to animals at some doses but, like many reinforcing drugs, is aversive at high doses (4.0 mg/kg).93 Similar effects have been found in experiments conducted in animals outfitted with intravenous catheters that allow them to self-administer WIN 55,212, a drug that mimics the effects of THC.100

A specific set of neural pathways has been proposed to be a "reward system" that underlies the reinforcement of drugs of abuse and other pleasurable stimuli.51 Reinforcing properties of drugs are associated with their ability to increase concentrations of particular neurotransmitters in areas that are part of the proposed brain reward system. The median forebrain bundle and the nucleus accumbens are associated with brain reward pathways.88,144 Cocaine, amphetamine, alcohol, opioids, nicotine, and THC all increase extracellular fluid dopamine in the nucleus accumbens region (reviewed by Koob and Le Moal88 and Nestler and Aghajanian110 in 1997). However, it is important to note that brain reward systems are not strictly "drug reinforcement centers." Rather, their biological role is to respond to a range of positive stimuli, including sweet foods and sexual attraction.

Tolerance

The rate at which tolerance to the various effects of any drug develops is an important consideration for its safety and efficacy. For medical use, tolerance to some effects of cannabinoids might be desirable. Differences in the rates at which tolerance to the multiple effects of a drug develops can be dangerous. For example, tolerance to the euphoric effects of heroin develops faster than tolerance to its respiratory depressant effects, so heroin users tend to increase their daily doses to reach their desired level of euphoria, thereby putting themselves at risk for respiratory arrest. Because tolerance to the various effects of cannabinoids might develop at different rates, it is important to evaluate independently their effects on mood, motor performance, memory, and attention, as well as any therapeutic use under investigation.

Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, and it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Performance impairment is less among people who use marijuana heavily than it is among those who use marijuana only occasionally,29,104,124 possibly because of tolerance. Heavy users tend to reach higher plasma concentrations of THC than light users after similar doses of THC, arguing against the possibility that heavy users show less performance impairment because they somehow absorb less THC (perhaps due to differences in smoking behavior).95

There appear to be variations in the development of tolerance to the different effects of marijuana and oral THC. For example, daily marijuana smokers participated in a residential laboratory study to compare the development of tolerance to THC pills and to smoked marijuana.61,62 One group was given marijuana cigarettes to smoke four times per day for four consecutive days; another group was given THC pills on the same schedule. During the four-day period, both groups became tolerant to feeling "high" and what they reported as a "good drug effect." In contrast, neither group became tolerant to the stimulatory effects of marijuana or THC on appetite. "Tolerance" does not mean that the drug no longer produced the effects but simply that the effects were less at the end than at the beginning of the four-day period. The marijuana smoking group reported feeling "mellow" after smoking and did not show tolerance to this effect; the group that took THC pills did not report feeling "mellow." The difference was also reported by many people who described their experiences to the IOM study team.

The oral and smoked doses were designed to deliver roughly equivalent amounts of THC to a subject. Each smoked marijuana dose consisted of five 10-second puffs of a marijuana cigarette containing 3.1% THC; the pills contained 30 mg of THC. Both groups also received placebo drugs during other four-day periods. Although the dosing of the two groups was comparable, different routes of administration resulted in different patterns of drug effect. The peak effect of smoked marijuana is usually felt within minutes and declines sharply after 30 minutes;68,95 the peak effect of oral THC is usually not felt until about an hour and lasts for several hours.118

Withdrawal

A distinctive marijuana and THC withdrawal syndrome has been identified, but it is mild and subtle compared with the profound physical syndrome of alcohol or heroin withdrawal. The symptoms of marijuana withdrawal include restlessness, irritability, mild agitation, insomnia, sleep EEG disturbance, nausea, and cramping (Table 3.2).31,74 In addition to those symptoms, two recent studies noted several more. A group of adolescents under treatment for conduct disorders also reported fatigue and illusions or hallucinations after marijuana abstinence (this study is discussed further in the section on "Prevalence and Predictors of Dependence on Marijuana and Other Drugs").31 In a residential study of daily marijuana users, withdrawal symptoms included sweating and runny nose, in addition to those listed above.62 A marijuana withdrawal syndrome, however, has been reported only in a group of adolescents in treatment for substance abuse problems31 and in a research setting where subjects were given marijuana or THC daily.62,74

Withdrawal symptoms have been observed in carefully controlled laboratory studies of people after use of both oral THC and smoked marijuana.61,62 In one study, subjects were given very high doses of oral THC: 180-210 mg per day for 10-20 days, roughly equivalent to smoking 9-10 2% THC cigarettes per day.
During the abstinence period at the end of the study, the study subjects were irritable and showed insomnia, runny nose, sweating, and decreased appetite. The withdrawal symptoms, however, were short lived. In four days they had abated. The time course contrasts with that in another study in which lower doses of oral THC were used (80-120 mg/day for four days) and withdrawal symptoms were still near maximal after four days.61,62

In animals, simply discontinuing chronic heavy dosing of THC does not reveal withdrawal symptoms, but the "removal" of THC from the brain can be made abrupt by another drug that blocks THC at its receptor if administered when the chronic THC is withdrawn. The withdrawal syndrome is pronounced, and the behavior of the animals becomes hyperactive and disorganized.153 The half-life of THC in brain is about an hour.16,24 Although traces of THC can remain in the brain for much longer periods, the amounts are not physiologically significant. Thus, the lack of a withdrawal syndrome when THC is abruptly withdrawn without administration of a receptor-blocking drug is probably not due to a prolonged decline in brain concentrations.

Craving

Craving, the intense desire for a drug, is the most difficult aspect of addiction to overcome. Research on craving has focused on nicotine, alcohol, cocaine, and opiates but has not specifically addressed marijuana.115 Thus, while this section briefly reviews what is known about drug craving, its relevance to marijuana use has not been established. Most people who suffer from addiction relapse within a year of abstinence, and they often attribute their relapse to craving.58 As addiction develops, craving increases even as maladaptive consequences accumulate. Animal studies indicate that the tendency to relapse is based on changes in brain function that continue for months or years after the last use of the drug.115 Whether neurobiological conditions change during the manifestation of an abstinence syndrome remains an unanswered question in drug abuse research.88 The "liking" of sweet foods, for example, is mediated by opioid forebrain systems and by brain stem systems, whereas "wanting" seems to be mediated by ascending dopamine neurons that project to the nucleus accumbens.109

Anticraving medications have been developed for nicotine and alcohol. The antidepressant, bupropion, blocks nicotine craving, while naltrexone blocks alcohol craving.115 Another category of addiction medication includes drugs that block other drugs' effects. Some of those drugs also block craving. For example, methadone blocks the euphoric effects of heroin and also reduces craving.

MARIJUANA USE AND DEPENDENCE

Prevalence of Use

Millions of Americans have tried marijuana, but most are not regular users. In 1996, 68.6 million people--32% of the U.S. population over 12 years old--had tried marijuana or hashish at least once in their lifetime, but only 5% were current users.132 Marijuana use is most prevalent among 18- to 25-year-olds and declines sharply after the age of 34 (Figure 3.1).77,132 Whites are more likely than blacks to use marijuana in adolescence, although the difference decreases by adulthood.132

Most people who have used marijuana did so first during adolescence. Social influences, such as peer pressure and prevalence of use by peers, are highly predictive of initiation into marijuana use.9 Initiation is not, of course, synonymous with continued or regular use.
A cohort of 456 students who experimented with marijuana during their high school years were surveyed about their reasons for initiating, continuing, and stopping their marijuana use.9 Students who began as heavy users were excluded from the analysis. Those who did not become regular marijuana users cited two types of reasons for discontinuing. The first was related to health and well-being; that is, they felt that marijuana was bad for their health or for their family and work relationships. The second type was based on age-related changes in circumstances, including increased responsibility and decreased regular contact with other marijuana users. Among high school students who quit, parental disapproval was a stronger influence than peer disapproval in discontinuing marijuana use. In the initiation of marijuana use, the reverse was true. The reasons cited by those who continued to use marijuana were to "get in a better mood or feel better." Social factors were not a significant predictor of continued use. Data on young adults show similar trends. Those who use drugs in response to social influences are more likely to stop using them than those who also use them for psychological reasons.80

The age distribution of marijuana users among the general population contrasts with that of medical marijuana users. Marijuana use generally declines sharply after the age of 34 years, whereas medical marijuana users tend to be over 35. That raises the question of what, if any, relationship exists between abuse and medical use of marijuana; however, no studies reported in the scientific literature have addressed this question.

Prevalence and Predictors of Dependence on Marijuana and Other Drugs

Many factors influence the likelihood that a particular person will become a drug abuser or an addict; the user, the environment, and the drug are all important factors (Table 3.3).114 The first two categories apply to potential abuse of any substance; that is, people who are vulnerable to drug abuse for individual reasons and who find themselves in an environment that encourages drug abuse are initially likely to abuse the most readily available drug--regardless of its unique set of effects on the brain. The third category includes drug-specific effects that influence the abuse liability of a particular drug. As discussed earlier in this chapter, the more strongly reinforcing a drug is, the more likely that it will be abused. The abuse liability of a drug is enhanced by how quickly its effects are felt, and this is determined by how the drug is delivered. In general, the effects of drugs that are inhaled or injected are felt within minutes, and the effects of drugs that are ingested take a half hour or more.

The proportion of people who become addicted varies among drugs. Table 3.4 shows estimates for the proportion of people among the general population who used or became dependent on different types of drugs. The proportion of users that ever became dependent includes anyone who was ever dependent--whether it was for a period of weeks or years--and thus includes more than those who are currently dependent. Compared to most other drugs listed in this table, dependence among marijuana users is relatively rare. This might be due to differences in specific drug effects, the availability of or penalties associated with the use of the different drugs, or some combination. Daily use of most illicit drugs is extremely rare in the general population.
In 1989, daily use of marijuana among high school seniors was less than that of alcohol (2.9% and 4.2%, respectively).76

Drug dependence is more prevalent in some sectors of the population than in others.8 Age, gender, and race or ethnic group are all important. Excluding tobacco and alcohol, the following trends of drug dependence are statistically significant:8 men are 1.6 times as likely as women to become drug dependent, non-Hispanic whites are about twice as likely as blacks to become drug dependent (the difference between non-Hispanic and Hispanic whites was not significant), and people 25-44 years old are more than three times as likely as those over 45 years old to become drug dependent.

More often than not, drug dependence co-occurs with other psychiatric disorders. Most people with a diagnosis of drug dependence disorder also have a diagnosis of another psychiatric disorder (76% of men and 65% of women).76 The most frequent co-occurring disorder is alcohol abuse; 60% of men and 30% of women with a diagnosis of drug dependence also abuse alcohol. In women who are drug dependent, phobic disorders and major depression are almost equally common (29% and 28%, respectively). Note that this study distinguished only between alcohol, nicotine and "other drugs"; marijuana was grouped among "other drugs." The frequency with which drug dependence and other psychiatric disorders co-occur might not be the same for marijuana and other drugs that were included in that category.

A strong association between drug dependence and antisocial personality or its precursor, conduct disorder, is also widely reported in children and adults (reviewed in 1998 by Robins126). Although the causes of the association are uncertain, Robins recently concluded that it is more likely that conduct disorders generally lead to substance abuse than the reverse.126 Such a trend might, however, depend on the age at which the conduct disorder is manifested. A longitudinal study by Brooks and co-workers noted a significant relationship between adolescent drug use and disruptive disorders in young adulthood; except for earlier psychopathology, such as childhood conduct disorder, the drug use preceded the psychiatric disorders.18 In contrast with use of other illicit drugs and tobacco, moderate (less than once a week and more than once a month) to heavy marijuana use did not predict anxiety or depressive disorders; but it was similar to those other drugs in predicting antisocial personality disorder. The rates of disruptive disorders increased with increased drug use. Thus, heavy drug use among adolescents can be a warning sign for later psychiatric disorders; whether it is an early manifestation of or a cause of those disorders remains to be determined.

Psychiatric disorders are more prevalent among adolescents who use drugs--including alcohol and nicotine--than among those who do not.79 Table 3.5 indicates that adolescent boys who smoke cigarettes daily are about 10 times as likely to have a psychiatric disorder diagnosis as those who do not smoke. However, the table does not compare intensity of use among the different drug classes. Thus, although daily cigarette smoking among adolescent boys is more strongly associated with psychiatric disorders than is any use of illicit substances, it does not follow that this comparison is true for every amount of cigarette smoking.79

Few marijuana users become dependent on it (Table 3.4), but those who do encounter problems similar to those associated with dependence on other drugs.19,143
appears to be less severe among people who use only marijuana than among those who 19,143 abuse cocaine or those who abuse marijuana with other drugs (including alcohol). Data gathered in 1990—1992 from the National Comorbidity Study of over 8,000 persons 15—54 years old indicate that 4.2% of the general population were dependent on 8 marijuanaatsometime. Similarresultsforthefrequencyofsubstanceabuseamongthe general population were obtained from the Epidemiological Catchment Area Program, a survey of over 19,000 people. According to data collected in the early 1980s for that study, 4.4% of adults have, at one time, met the criteria for marijuana dependence. In comparison, 13.8% of adults met the criteria for alcohol dependence and 36.0% for tobacco dependence. After alcohol and nicotine, marijuana was the substance most frequently associated with a diagnosis of substance dependence. In a 15-year study begun in 1979, 7.3% of 1,201 adolescents and young adults in suburban New Jersey at some time met the criteria for marijuana dependence; this indicates that the rate of marijuana dependence might be even higher in some groups of 71 Adolescents meet the criteria for drug dependence at lower rates of marijuana use than do adults, and this 25 adolescents and young adults than in the general population. suggests that they are more vulnerable to dependence than adults (see Box 3.2). Dependence Youths who are already dependent on other substances are particularly vulnerable to 31 marijuana dependence. For example, Crowley and co-workers 229 adolescent patients in a residential treatment program for delinquent, substance- involved youth and found that those patients were dependent on an average of 3.2 substances. The adolescents had previously been diagnosed as dependent on at least one substance (including nicotine and alcohol) and had three or more conduct disorder symptoms during their life. About 83% of those who had used marijuana at least six times went on to develop marijuana dependence. About equal numbers of youths in the study had a diagnosis of marijuana dependence and a diagnosis of alcohol dependence; fewer were nicotine dependent. Comparisons of dependence potential between different drugs should be made cautiously. The probability that a particular drug will be abused is interviewed a group of influenced by many factors, including the specific drug effects and availability of the drug. Although parents often state that marijuana caused their children to be rebellious, the troubled adolescents in the study by Crowley and co-workers developed conduct disorders before marijuana abuse. That is consistent with reports that the more symptoms 127 of conduct disorders children have, the younger they begin drug abuse, earlier they begin drug use, the more likely it is to be followed by abuse or 125 Genetic factors are known to play a role in the likelihood of abuse for drugs other than 7,129 dependence. marijuana, and it is not unexpected that genetic factors play a role in the marijuana experience, including the likelihood of abuse. A study of over 8,000 male twins listed in the Vietnam Era Twin Registry indicated that genes have a statistically significant 97 influence on whether a person finds the effects of marijuana pleasant. Not surprisingly, people who found marijuana to be pleasurable used it more often than those who found it unpleasant. 
The study suggested that, although social influences play an important role in the initiation of use, individual differences--perhaps associated with the brain's reward system--influence whether a person will continue using marijuana. Similar results were 86 Family and social environment strongly influenced the likelihood of ever using marijuana but had little effect on the likelihood of heavy use or abuse. The latter were more influenced by genetic factors. Those results are consistent with the finding that the degree to which rats find THC rewarding is genetically based. In summary, although few marijuana users develop dependence, some do. But they appear to be less likely to do so than users of other drugs (including alcohol and nicotine), and marijuana dependence appears to be less severe than dependence on other drugs. Drug dependence is more prevalent in some sectors of the population than others, but no group has been identified as particularly vulnerable to the drug-specific effects of marijuana. Adolescents, especially troubled ones, and people with psychiatric disorders (including substance abuse) appear to be more likely than the general population to become dependent on marijuana. If marijuana or cannabinoid drugs were approved for therapeutic uses, it would be important to consider the possibility of dependence, particularly for patients at high risk for substance dependence. Some controlled substances that are approved medications produce dependence after long-term use; this, however, is a normal part of patient management and does not generally present undue risk to the patient. Progression from Marijuana to Other Drugs The fear that marijuana use might cause, as opposed to merely precede, the use of drugs that are more harmful is of great concern. To judge from comments submitted to the IOM study team, it appears to be of greater concern than the harms directly related to marijuana itself. The discussion that marijuana is a "gateway" drug implicitly recognizes that other illicit drugs might inflict greater damage to health or social relations than found in a study of female twins. and that the 92 marijuana. Although the scientific literature generally discusses drug use progression between a variety of drug classes, including alcohol and tobacco, the public discussion has focused on marijuana as a "gateway" drug that leads to abuse of more harmful illicit drugs, such as cocaine and heroin. There are strikingly regular patterns in the progression of drug use from adolescence to adulthood. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug that most people encounter. Not surprisingly, most users of other illicit 81,82 drugs used marijuana first. marijuana--they begin with alcohol and nicotine, usually when they are too young to do 82,90 so legally. The gateway analogy evokes two ideas that are often confused. The first, more often referred to as the "stepping stone" hypothesis, is the idea that progression from marijuana 82 In fact, most drug users do not begin their drug use with to other drugs arises from pharmacological properties of marijuana itself. that marijuana serves as a gateway to the world of illegal drugs in which youths have greater opportunity and are under greater social pressure to try other illegal drugs. The latter interpretation is most often used in the scientific literature, and it is supported, although not proven, by the available data. 
The stepping stone hypothesis applies to marijuana only in the broadest sense. People who enjoy the effects of marijuana are, logically, more likely to be willing to try other mood-altering drugs than are people who are not willing to try marijuana or who dislike its effects. In other words, many of the factors associated with a willingness to use marijuana are, presumably, the same as those associated with a willingness to use other illicit drugs. Those factors include physiological reactions to the drug effect, which are consistent with the stepping stone hypothesis, but also psychosocial factors, which are independent of drug-specific effects. There is no evidence that marijuana serves as a stepping stone on the basis of its particular physiological effect. One might argue that marijuana is generally used before other illicit mood-altering drugs, in part, because its effects are milder; in that case, marijuana is a stepping stone only in the same sense as taking a small dose of a particular drug and then increasing that dose over time is a stepping stone to increased drug use. Whereas the stepping stone hypothesis presumes a predominantly physiological component of drug progression, the gateway theory is a social theory. The latter does not suggest that the pharmacological qualities of marijuana make it a risk factor for progression to other drug use. Instead, the legal status of marijuana makes it a gateway 82 Psychiatric disorders are associated with substance dependence and are probably risk factors for progression in drug use. For example, the troubled adolescents studied by 31 were dependent on an average of 3.2 substances, and this suggests that their conduct disorders were associated with increased risk of progressing from one drug to another. Abuse of a single substance is probably also a risk factor for later multiple drug use. For example, in a longitudinal study that examined drug use and drug. Crowley and co-workers The second is dependence, about 26% of problem drinkers reported that they first used marijuana after the onset of alcohol-related problems (R. Pandina, IOM workshop). The study also found that 11% of marijuana users developed chronic marijuana problems; most also had alcohol problems. Intensity of drug use is an important risk factor in progression. Daily marijuana users are more likely than their peers to be extensive users of other substances (for review, see 78 Kandel and Davies by the age 24—25, 75% never used any other illicit drug; 53% of those who had used it 78 The factors that best predict use of illicit drugs other than marijuana are probably the following: age of first alcohol or nicotine use, heavy marijuana use, and psychiatric disorders. However, progression to illicit drug use is not synonymous with heavy or persistent drug use. Indeed, although the age of onset of use of licit drugs (alcohol and nicotine) predicts later illicit drug use, it does not appear to predict persistent or heavy 90 use of illicit drugs. Data on the gateway phenomenon are often overinterpreted. For example, one study 55 ). Of 34- to 35-year- old men who had used marijuana 10—99 times more than 100 times did progress to using other illicit drugs 10 or more times. Comparable proportions for women are 64% and 50%. reports that "marijuana's role as a gateway drug appears to have increased." It was a retrospective study based on interviews of drug abusers who reported smoking crack or injecting heroin daily. 
The data from the study provide no indication of what proportion of marijuana users become serious drug abusers; rather, they indicate that serious drug abusers usually use marijuana before they smoke crack or inject heroin. Only a small percentage of the adult population uses crack or heroin daily; during the five-year period from 1993 to 1997, an average of three people per 1,000 used crack and about two per 132 Many of the data on which the gateway theory is based do not measure dependence; instead, they measure use--even once-only use. Thus, they show only that marijuana users are more likely to use other illicit drugs (even if only once) than are people who never use marijuana, not that they become dependent or even frequent users. The authors of these studies are careful to point out that their data should not be used as evidence of an inexorable causal progression; rather they note that identifying stage-based user groups makes it possible to identify the specific risk factors that predict movement from 25 In the sense that marijuana use typically precedes rather than follows initiation into the use of other illicit drugs, it is indeed a gateway drug. However, it does not appear to be a gateway drug to the extent that it is the cause or even that it is the most significant predictor of serious drug abuse; that is, care must be taken not to attribute cause to association. The most consistent predictors of serious drug use appear to be the intensity of marijuana use and co-occurring psychiatric disorders or a family history of 78,83 psychopathology (including alcoholism). 1,000 used heroin in the preceding month. one stage of drug use to the next--the real issue in the gateway discussion. An important caution is that data on drug use progression pertain to nonmedical drug use. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would be the same. Kandel and co-workers also included nonmedical use of prescription psychoactive drugs in their study of drug use 82 progression. a clear and consistent sequence of drug use involving the abuse of prescription psychoactive drugs. The current data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse among medical marijuana users. Whether the medical use of marijuana might encourage drug abuse among the general community--not among medical marijuana users themselves but among others simply because of the fact that marijuana would be used for medical purposes--is another question. LINK BETWEEN MEDICAL USE AND DRUG ABUSE Almost everyone who spoke or wrote to the IOM study team about the potential harms posed by the medical use of marijuana felt that it would send the wrong message to children and teenagers. They stated that information about the harms caused by marijuana is undermined by claims that marijuana might have medical value. Yet many of our powerful medicines are also dangerous medicines. These two facets of medicine-- effectiveness and risk--are inextricably linked. The question here is not whether marijuana can be both harmful and helpful but whether the perception of its benefits will increase its abuse. For now any answer to the question remains conjecture. Because marijuana is not an approved medicine, there is little information about the consequences of its medical use in modern society. Reasonable inferences might be drawn from some examples. 
Opiates, such as morphine and codeine, are an example of a class of drugs that is both abused to great harm and used to great medical benefit, and it would be useful to examine the relationship between their medical use and their abuse. In a "natural experiment" during 1973—1978 some states decriminalized marijuana, and others did not. Finally, one can examine the short-term consequences of the publicity surrounding the 1996 medical marijuana campaign in California and ask whether it had any measurable impact on marijuana consumption among youth in California; the consequences of "message" that marijuana might have medical use are examined below. Medical Use and Abuse of Opiates Two highly influential papers published in the 1920s and 1950s led to widespread concern among physicians and medical licensing boards that liberal use of opiates would 106 in 1996). Such fears have proven unfounded; it is now recognized that fear of producing addicts through medical treatment resulted in needless suffering among patients with pain as physicians 27,44 In contrast with the use of alcohol, nicotine, and illicit drugs, there was not result in many addicts (reviewed by Moulin and co-workers needlessly limited appropriate doses of medications. addiction problems with misuse of drugs that have been prescribed for medical use. Few people begin their drug 114 Opiates are carefully regulated in the medical setting, and diversion of medically prescribed opiates to the black market is not generally considered to be a major problem. No evidence suggests that the use of opiates or cocaine for medical purposes has increased the perception that their illicit use is safe or acceptable. Clearly, there are risks that patients will abuse marijuana for its psychoactive effects and some likelihood of diversion of marijuana from legitimate medical channels into the illicit market. But those risks do not differentiate marijuana from many accepted medications that are abused by some patients or diverted from medical channels for nonmedical use. Medications with abuse potential are placed in Schedule II of the Controlled Substances Act, which brings them under stricter control, including quotas on the amount that can be legally manufactured (see chapter 5 for discussion of the Controlled Substances Act). That scheduling also signals to physicians that a drug has abuse potential and that they should monitor its use by patients who could be at risk for drug abuse. Marijuana Decriminalization Monitoring the Future, the annual survey of values and lifestyles of high school seniors, revealed that high school seniors in decriminalized states reported using no more 72 marijuana than did their counterparts in states where marijuana was not decriminalized. Another study reported somewhat conflicting evidence indicating that decriminalization 105 had increased marijuana use. Network (DAWN), which has collected data on drug-related emergency room (ER) cases since 1975. There was a greater increase from 1975 to 1978 in the proportion of ER patients who had used marijuana in states that had decriminalized marijuana in 1975— 1976 than in states that had not decriminalized it (Table 3.6). Despite the greater increase among decriminalized states, the proportion of marijuana users among ER patients by 1978 was about equal in states that had and states that had not decriminalized marijuana. That is because the non-decriminalized states had higher rates of marijuana use before decriminalization. 
In contrast with marijuana use, rates of other illicit drug use among ER patients were substantially higher in states that did not decriminalize marijuana use. Thus, there are different possible reasons for the greater increase in marijuana use in the decriminalized states. On the one hand, decriminalization might have led to an increased use of marijuana (at least among people who sought health care in hospital ERs). On the other hand, the lack of decriminalization might have encouraged greater use of drugs that are even more dangerous than marijuana. The differences between the results for high school seniors from the Monitoring the Future study and the DAWN data are unclear, although the author of the latter study suggests that the reasons might lie in limitations inherent in how the DAWN data are 105 In 1976, the Netherlands adopted a policy of toleration for possession of up to 30 g of marijuana. There was little change in marijuana use during the seven years after the policy change, which suggests that the change itself had little effect; however, in 1984, when Dutch "coffee shops" that sold marijuana commercially spread throughout That study used data from the Drug Awareness Warning collected. Amsterdam, marijuana use began to increase. continued to increase in the Netherlands at the same rate as in the United States and Norway--two countries that strictly forbid marijuana sale and possession. Furthermore, during this period, approximately equal percentages of American and Dutch 18 year olds used marijuana; Norwegian 18 year olds were about half as likely to have used marijuana. The authors of this study conclude that there is little evidence that the Dutch marijuana depenalization policy led to increased marijuana use, although they note that commercialization of marijuana might have contributed to its increased use. Thus, there is little evidence that decriminalization of marijuana use necessarily leads to a substantial increase in marijuana use. The Medical Marijuana Debate The most recent National Household Survey on Drug Abuse showed that among people 12—17 years old the perceived risk associated with smoking marijuana once or 132 (Perceived risk is measured as the percentage of survey respondents who report that they "perceive great risk of harm" in using a drug at a specified frequency.) At first glance, that might seem to validate the fear that the medical marijuana debate of 1996--before passage of the California medical marijuana referendum in November 1997--had sent a message that marijuana use is safe. But a closer analysis of the data shows that Californian youth were an exception to the national trend. In contrast to the national trend, the perceived risk of 1321 In summary, there is no evidence that the medical marijuana debate has altered adolescents' 132 PSYCHOLOGICAL HARMS In assessing the relative risks and benefits related to the medical use of marijuana, the psychological effects of marijuana can be viewed both as unwanted side effects and as potentially desirable end points in medical treatment. However, the vast majority of research on the psychological effects of marijuana has been in the context of assessing the drug's intoxicating effects when it is used for nonmedical purposes. Thus, the literature does not directly address the effects of marijuana taken for medical purposes. There are some important caveats to consider in attempting to extrapolate from the research mentioned above to the medical use of marijuana. 
The circumstances under which psychoactive drugs are taken are an important influence on their psychological effects. Furthermore, research protocols to study marijuana's psychological effects in most instances were required to use participants who already had experience with marijuana. People who might have had adverse reactions to marijuana either would choose not to participate in this type of study or would be screened out by the investigator. Therefore, the incidence of adverse reactions to marijuana that might occur in people with no marijuana experience cannot be estimated from such studies. A further complicating factor concerns the dose regimen used for laboratory studies. In most instances, laboratory research studies have looked at the effects of single doses of twice a week had decreased significantly between 1996 and 1997. marijuana use did not change among California youth between 1996 and 1997. perceptions of the risks associated with marijuana use. 98 During the 1990s, marijuana use has marijuana, which might be different from those observed when the drug is taken repeatedly for a chronic medical condition. Nonetheless, laboratory studies are useful in suggesting what psychological functions might be studied when marijuana is evaluated for medical purposes. Results of laboratory studies indicate that acute and chronic marijuana use has pronounced effects on mood, psychomotor, and cognitive functions. These psychological domains should therefore be considered in assessing the relative risks and therapeutic benefits related to marijuana or cannabinoids for any medical condition. Psychiatric Disorders A major question remains as to whether marijuana can produce lasting mood disorders 52 or psychotic disorders, such as schizophrenia. Georgotas and Zeidenberg reported that smoking 10—22 marijuana cigarettes per day was associated with a gradual waning of the positive mood and social facilitating effects of marijuana and an increase in irritability, social isolation, and paranoid thinking. Inasmuch as smoking one cigarette is 68,95,118 enough to make a person feel "high" for about 1—3 hours, the subjects in that study were taking very high doses of marijuana. Reports have described the development of apathy, lowered motivation, and impaired educational performance in heavy marijuana 121,122 There are clinical reports of marijuana-induced psychosis-like states (schizophrenia-like, 112 depression, and/or mania) lasting for a week or more. of the varied nature of the psychotic states induced by marijuana, there is no specific "marijuana psychosis." Rather, the marijuana experience might trigger latent users who do not appear to be behaviorally impaired in other ways. psychopathology of many types. concluded that disorder. As noted earlier, drug abuse is common among people with psychiatric 66 60 More recently, Hall and colleagues "there is reasonable evidence that heavy cannabis use, and perhaps acute use in sensitive individuals, can produce an acute psychosis in which confusion, amnesia, delusions, hallucinations, anxiety, agitation and hypomanic symptoms predominate." Regardless of which of those interpretations is correct, the two reports agree that there is little evidence that marijuana alone produces a psychosis that persists after the period of intoxication. Schizophrenia The association between marijuana and schizophrenia is not well understood. 
The scientific literature indicates general agreement that heavy marijuana use can precipitate schizophrenic episodes but not that marijuana use can cause the underlying psychotic 59,96,151 disorders. Estimates of the prevalence of marijuana use among schizophrenics vary considerably but are in general agreement that it is at least as great as that among the general population. 35 Schizophrenics prefer the effects of marijuana to those of alcohol 134 134 and cocaine, reasons for this are unknown, but it raises the possibility that schizophrenics might obtain some symptomatic relief from moderate marijuana use. But overall, compared with the general population, people with schizophrenia or with a family history of schizophrenia which they seem to use less often than does the general population. The Hollister suggests that, because are likely to be at greater risk for adverse psychiatric effects from the use of cannabinoids. Cognition As discussed earlier, acutely administered marijuana impairs cognition. 60,66,112 Positron emission tomography (PET) imaging allows investigators to measure the acute effects of marijuana smoking on active brain function. Human volunteers who perform auditory attention tasks before and after smoking a marijuana cigarette show impaired performance while under the influence of marijuana; this is associated with substantial reduction in blood flow to the temporal lobe of the brain, an area that is sensitive to such 116,117 tasks. Marijuana smoking increases blood flow in other brain regions, such as the 101,155 frontal lobes and lateral cerebellum. Earlier studies purporting to show structural 22 changes in the brains of heavy marijuana users have not been replicated with more sophisticated techniques. 28,89 14,122 Nevertheless, recent studies marijuana users after a brief period (19—24 hours) of marijuana abstinence. Longer term 140 Although these studies have attempted to match heavy marijuana users with subjects of similar cognitive abilities before exposure to marijuana use, the adequacy of this matching has been 133 cognitive deficits in heavy marijuana users have also been reported. have found subtle defects in cognitive tasks in heavy questioned. reviewed in an article by Pope and colleagues. are designed to differentiate between changes in brain function caused the effects of marijuana and by the illness for which marijuana is being given. AIDS dementia is an obvious example of this possible confusion. It is also important to determine whether repeated use of marijuana at therapeutic dosages produces any irreversible cognitive effects. Psychomotor Performance Marijuana administration has been reported to affect psychomotor performance on a 23 not only details the studies that have been done but also points out the inconsistencies among studies, the methodological shortcomings of many studies, and the large individual differences among the studies attributable to subject, situational, and methodological factors. Those factors must be considered in studies of psychomotor performance when participants are involved in a clinical trial of the efficacy of marijuana. The types of psychomotor functions that have been shown to be disrupted by the acute administration of marijuana include body sway, hand steadiness, rotary pursuit, driving and flying simulation, divided attention, sustained attention, and the digit-symbol substitution test. 
A study of experienced airplane pilots showed that even 24 hours after a single marijuana cigarette their performance on flight 163 Before the tests, however, they told the study investigators that they were sure their performance would be unaffected. The complex methodological issues facing research in this area are well number of tasks. The review by Chait and Pierri simulator tests was impaired. 121 Care must be exercised so that studies Cognitive impairments associated with acutely administered marijuana limit the activities that people would be able to do safely or productively. For example, no one under the influence of marijuana or THC should drive a vehicle or operate potentially dangerous equipment. Amotivational Syndrome One of the more controversial effects claimed for marijuana is the production of an "amotivational syndrome." This syndrome is not a medical diagnosis, but it has been used to describe young people who drop out of social activities and show little interest in school, work, or other goal-directed activity. When heavy marijuana use accompanies these symptoms, the drug is often cited as the cause, but no convincing data demonstrate 23 a causal relationship between marijuana smoking and these behavioral characteristics. is not enough to observe that a chronic marijuana user lacks motivation. Instead, relevant personality traits and behavior of subjects must be assessed before and after the subject becomes a heavy marijuana user. Because such research can only be done on subjects who become heavy marijuana users on their own, a large population study--such as the Epidemiological Catchment Area study described earlier in this chapter--would be needed to shed light on the relationship between motivation and marijuana use. Even then, although a causal relationship between the two could, in theory, be dismissed by an epidemiological study, causality could not be proven. USER: Whether currently available or unavailable, what is an example of a smokeless cannabis delivery method that clinical trials hope to help develop? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
22
22
19,801
null
297
You must respond to the prompt using only information provided in the context block. Please limit your response to about 150 words.
What is the relationship between operating flexibility and the amount of cash a firm holds?
2.2.3. How does D&I affect financial policies?
The previous section argues that diversity and inclusion (D&I) could affect a firm’s operating flexibility. In addition, a literature in financial economics indicates that a firm’s operating flexibility affects its financial policies. Thus, D&I could affect a firm’s financial policies as well. A literature in finance theorizes and documents that more operating flexibility allows a firm to hold less cash. Opler et al. (1999) argue that firms hold cash for a precautionary motive, e.g., in case of an unexpected loss or an unexpected opportunity to invest (see Almeida et al. (2014) for a review). Since operating flexibility could help a firm mitigate losses from negative shocks and expand more easily following positive shocks, more operating flexibility would imply less of a precautionary motive to hold cash. Empirically, Gu and Li (2021) document that flexible firms hold less cash, and Ghaly, Anh Dang, and Stathopoulos (2017) show that firms with more inflexibility due to a dependence on skilled labor hold more cash. Another literature in finance argues that more operating flexibility could affect a firm’s debt policies. Kraus and Litzenberger (1973) theorize that a firm chooses its optimal debt ratio by trading off the tax shield benefit of debt and the cost of financial distress related to debt, both of which Gu, Hackbarth, and Li (2020) argue could be affected by operating flexibility. The argument is that a firm’s flexibility to downsize mitigates its losses in bad times, leading to a lower expected cost of financial distress. In addition, a firm’s flexibility to scale up in good times results in a higher taxable income, which increases the value of the debt tax shield. In other words, operating flexibility could both decrease the cost and increase the benefit of using debt, so a more flexible firm would optimally use more debt in its capital structure. This prediction holds in many empirical studies across different dimensions of operating flexibility, including production flexibility (Reinartz and Schmid (2016)), pricing flexibility (D’Acunto et al. (2018)), and workforce flexibility (Simintzi, Vig, and Volpin (2015), Serfling (2016), Bates, Du, and Wang (2020)). Because D&I can affect operating flexibility, and operating flexibility can affect cash holdings and debt usage, D&I can affect these financial policies. If D&I increases a firm’s operating flexibility, then a diverse and inclusive firm (D&I firm) would hold less cash and use more debt. If D&I decreases a firm’s operating flexibility, then I would expect the opposite. Beyond an indirect channel, D&I considerations could directly affect a firm’s cash and debt holdings as well. On the one hand, direct spending on D&I practices, such as the costs of sexual harassment training or diversity hiring, could reduce a firm’s financial resources, e.g., less cash. On the other hand, because building a D&I culture is likely costly (Gorton and Zentefis (2020)), a firm could have an incentive to hold more cash and use less debt to keep the financial flexibility needed to maintain such a culture. Overall, it is an empirical question how a firm’s D&I affects its financial policies. I formally state these hypotheses in their null forms below: H2a: a D&I firm on average does not use more debt in its capital structure than a non-D&I firm. H2b: a D&I firm on average does not hold more cash on its balance sheet than a non-D&I firm.
false
22
15
558
null
777
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Draw your answer from the below text only. Respond in 3 sentences.
Summarize PHSA Section 514
Section 1411. Comprehensive Community Mental Health Services for Children with Serious Emotional Disturbances
Background
PHSA Title V, Part E (Sections 561-565), authorizes SAMHSA’s Children’s Mental Health Services. PHSA Title V, Part E, requires the Secretary to award grants to support “comprehensive community mental health services for children with a serious emotional disturbance.” 157 Reauthorized by the Cures Act, 158 the authorization specifies reporting requirements, technical assistance requirements, and the ages of children to be served, among other things. PHSA Section 565 (“General Provisions”) provides definitions for terms used in the Title V, Part E, authorizations and includes the authorization of appropriations, among other things. PHSA Section 565 previously authorized $119 million (rounded) to be appropriated for each of FY2018-FY2022.
Provision
Section 1411 amends PHSA Section 565 by adding “kinship caregivers” to the definition of “family” and reauthorizing SAMHSA’s Children’s Mental Health Services. PHSA Section 565 now authorizes $125 million to be appropriated for each of FY2023-FY2027.
Section 1412. Substance Use Disorder Treatment and Early Intervention Services for Children and Adolescents
Background
PHSA Section 514 (“Substance Use Disorder Treatment and Early Intervention Services for Children, Adolescents, and Young Adults”) authorizes SAMHSA’s Children and Families program. PHSA Section 514 requires the Secretary to award grants, contracts, or cooperative agreements to support substance use disorder services for children and adolescents. Eligible entities include public and private nonprofit entities, including Native Alaskan entities and Indian Tribes and Tribal organizations. PHSA Section 514 requires the Secretary to give priority to applicants meeting specified criteria (e.g., providing gender-specific and culturally appropriate treatment). The Cures Act reauthorized the activities in this provision in 2016, further specifying definitions for Indian Tribes or Tribal Organizations and Indian Health Service facilities, among other things. 159 PHSA Section 514 previously authorized $29.6 million (rounded) to be appropriated for each of FY2018-FY2022.
157 42 U.S.C. §290ff. 158 Cures Act §10001. 159 Cures Act §10003.
Provision
Section 1412 amends PHSA Section 514 by making technical edits to Tribal terms and reauthorizing $29.6 million (rounded) for each of FY2023-FY2027 for SAMHSA’s Children and Families program.
Chapter 3—Garrett Lee Smith Memorial Reauthorization Sections 1421-1424
Background
SAMHSA supports several suicide prevention initiatives, including the National Strategy for Suicide Prevention, a suicide prevention technical assistance center, and the Garrett Lee Smith (GLS) State and Campus suicide grant programs, among others. In 2004, the Garrett Lee Smith Memorial Act (P.L. 108-355) explicitly authorized three of these suicide prevention programs in PHSA Title V. PHSA Section 520C (“Suicide Prevention Technical Assistance Center”) authorizes the Garrett Lee Smith (GLS) Suicide Prevention Resource Center. Amended by the Cures Act, 160 PHSA Section 520C requires the Secretary, acting through the SAMHSA Assistant Secretary, to operate a technical assistance center focused on suicide prevention.
The provision specifies the program’s focus on suicide prevention across the lifespan and requires the Secretary to submit to Congress a report on the activities carried out by the center. PHSA Section 520C previously authorized $6 million (rounded) to be appropriated annually for each of FY2018-FY2022 for the center. PHSA Sections 520E (“Youth Suicide Early Intervention and Prevention Strategies”) and 520E-2 (“Mental Health and Substance Use Disorder Services on Campus”) authorize the Garrett Lee Smith (GLS) State and Campus suicide grant programs. The GLS State grant program—entitled the GLS State/Tribal Youth Suicide Prevention and Early Intervention grant program—awards grants to states to support comprehensive statewide youth suicide prevention and early intervention strategies. The GLS Campus Suicide Prevention grant program provides institutions of higher education with grants to implement an array of suicide prevention initiatives on campus. 161 Both authorizing provisions were previously amended by the Cures Act in 2016.162 PHSA Section 520E previously authorized $30 million for each of FY2018-FY2022. PHSA Section 520E-2 previously authorized $7 million for each of FY2018-FY2022.
false
40
4
637
null
395
Do not refer to any outside information not found in this document to answer the prompt. Answer in a single sentence, but do not quote directly from the document.
What are the implications for AI in the medical space?
The Potential for Artificial Intelligence In Healthcare Artificial intelligence (AI) and related technologies are increasingly prevalent in business and society, and are beginning to be applied to healthcare. These technologies have the potential to transform many aspects of patient care, as well as administrative processes within provider, payer and pharmaceutical organizations. There are already a number of research studies suggesting that AI can perform as well as or better than humans at key healthcare tasks, such as diagnosing disease. Today, algorithms are already outperforming radiologists at spotting malignant tumors, and guiding researchers in how to construct cohorts for costly clinical trials. However, for a variety of reasons, we believe that it will be many years before AI replaces humans for broad medical process domains. In this article, we describe both the potential that AI offers to automate aspects of care and some of the barriers to rapid implementation of AI in healthcare. Types of AI of relevance to healthcare Artificial intelligence is not one technology, but rather a collection of them. Most of these technologies have immediate relevance to the healthcare field, but the specific processes and tasks they support vary widely. Some particular AI technologies of high importance to healthcare are defined and described below. Machine learning – neural networks and deep learning Machine learning is a statistical technique for fitting models to data and to ‘learn’ by training models with data. Machine learning is one of the most common forms of AI; in a 2018 Deloitte survey of 1,100 US managers whose organizations were already pursuing AI, 63% of companies surveyed were employing machine learning in their businesses. 1 It is a broad technique at the core of many approaches to AI and there are many versions of it. In healthcare, the most common application of traditional machine learning is precision medicine – predicting what treatment protocols are likely to succeed on a patient based on various patient attributes and the treatment context. 2 The great majority of machine learning and precision medicine applications require a training dataset for which the outcome variable (eg onset of disease) is known; this is called supervised learning. A more complex form of machine learning is the neural network – a technology that has been available since the 1960s has been well established in healthcare research for several decades 3 and has been used for categorisation applications like determining whether a patient will acquire a particular disease. It views problems in terms of inputs, outputs and weights of variables or ‘features’ that associate inputs with outputs. It has been likened to the way that neurons process signals, but the analogy to the brain's function is relatively weak. The most complex forms of machine learning involve deep learning, or neural network models with many levels of features or variables that predict outcomes. There may be thousands of hidden features in such models, which are uncovered by the faster processing of today's graphics processing units and cloud architectures. A common application of deep learning in healthcare is recognition of potentially cancerous lesions in radiology images. 4 Deep learning is increasingly being applied to radiomics, or the detection of clinically relevant features in imaging data beyond what can be perceived by the human eye. 
5 Both radiomics and deep learning are most commonly found in oncology-oriented image analysis. Their combination appears to promise greater accuracy in diagnosis than the previous generation of automated tools for image analysis, known as computer-aided detection or CAD. Deep learning is also increasingly used for speech recognition and, as such, is a form of natural language processing (NLP), described below. Unlike earlier forms of statistical analysis, each feature in a deep learning model typically has little meaning to a human observer. As a result, the explanation of the model's outcomes may be very difficult or impossible to interpret. Diagnosis and treatment applications Diagnosis and treatment of disease has been a focus of AI since at least the 1970s, when MYCIN was developed at Stanford for diagnosing blood-borne bacterial infections. 8 This and other early rule-based systems showed promise for accurately diagnosing and treating disease, but were not adopted for clinical practice. They were not substantially better than human diagnosticians, and they were poorly integrated with clinician workflows and medical record systems. More recently, IBM's Watson has received considerable attention in the media for its focus on precision medicine, particularly cancer diagnosis and treatment. Watson employs a combination of machine learning and NLP capabilities. However, early enthusiasm for this application of the technology has faded as customers realized the difficulty of teaching Watson how to address particular types of cancer 9 and of integrating Watson into care processes and systems. 10 Watson is not a single product but a set of ‘cognitive services’ provided through application programming interfaces (APIs), including speech and language, vision, and machine learning-based data-analysis programs. Most observers feel that the Watson APIs are technically capable, but taking on cancer treatment was an overly ambitious objective. Watson and other proprietary programs have also suffered from competition with free ‘open source’ programs provided by some vendors, such as Google's TensorFlow. Implementation issues with AI bedevil many healthcare organizations. Although rule-based systems are incorporated within EHR systems are widely used, including at the NHS, 11 they lack the precision of more algorithmic systems based on machine learning. These rule-based clinical decision support systems are difficult to maintain as medical knowledge changes and are often not able to handle the explosion of data and knowledge based on genomic, proteomic, metabolic and other ‘omic-based’ approaches to care. This situation is beginning to change, but it is mostly present in research labs and in tech firms, rather than in clinical practice. Scarcely a week goes by without a research lab claiming that it has developed an approach to using AI or big data to diagnose and treat a disease with equal or greater accuracy than human clinicians. Many of these findings are based on radiological image analysis, 12 though some involve other types of images such as retinal scanning 13 or genomic-based precision medicine. 14 Since these types of findings are based on statistically-based machine learning models, they are ushering in an era of evidence- and probability-based medicine, which is generally regarded as positive but brings with it many challenges in medical ethics and patient/ clinician relationships. 15 Tech firms and startups are also working assiduously on the same issues. 
Google, for example, is collaborating with health delivery networks to build prediction models from big data to warn clinicians of high-risk conditions, such as sepsis and heart failure. 16 Google, Enlitic and a variety of other startups are developing AI-derived image interpretation algorithms. Jvion offers a ‘clinical success machine’ that identifies the patients most at risk as well as those most likely to respond to treatment protocols. Each of these could provide decision support to clinicians seeking to find the best diagnosis and treatment for patients. There are also several firms that focus specifically on diagnosis and treatment recommendations for certain cancers based on their genetic profiles. Since many cancers have a genetic basis, human clinicians have found it increasingly complex to understand all genetic variants of cancer and their response to new drugs and protocols. Firms like Foundation Medicine and Flatiron Health, both now owned by Roche, specialise in this approach. Both providers and payers for care are also using ‘population health’ machine learning models to predict populations at risk of particular diseases 17 or accidents 18 or to predict hospital readmission. 19 These models can be effective at prediction, although they sometimes lack all the relevant data that might add predictive capability, such as patient socio-economic status. But whether rules-based or algorithmic in nature, AI-based diagnosis and treatment recommendations are sometimes challenging to embed in clinical workflows and EHR systems. Such integration issues have probably been a greater barrier to broad implementation of AI than any inability to provide accurate and effective recommendations; and many AI-based capabilities for diagnosis and treatment from tech firms are standalone in nature or address only a single aspect of care. Some EHR vendors have begun to embed limited AI functions (beyond rule-based clinical decision support) into their offerings, 20 but these are in the early stages. Providers will either have to undertake substantial integration projects themselves or wait until EHR vendors add more AI capabilities.
false
29
10
1,390
null
245
Use only the information provided in the text to form your response, do not use any external sources or prior knowledge. Give your answers in a numbered list with an explanation or context following each one.
What are the challenges to Hokkaido's economy that can be found in the article?
After the Second World War, Hokkaido drew up six Development Plans. These Plans set out the basic development concept and direction. Based on them, important large-scale infrastructure projects such as roads, harbors, railways, airports and large-scale industrial parks were planned. The necessary investment amounts and their economic and social effects on the Hokkaido region were then calculated. The Plans were finally decided by the Cabinet after discussions in the Hokkaido Development Council, which is operated by the Hokkaido Development Agency (now the Ministry of Infrastructure, Land and Transport). The targets and contents of the Plans changed according to changes in the global and Japanese economic situation and in Hokkaido’s stage of development. The basic development strategy for Hokkaido was to utilize and develop its remaining rich natural resources and to support people in settling there safely and happily. In the first stage, just after the war, Hokkaido was expected to become a base for supplying foodstuffs and to absorb the growing population returning from former overseas territories. Later, various large infrastructure projects were planned and implemented to develop its economy, and nowadays the development of Hokkaido’s own identity has come to be emphasized, such as its remaining natural beauty, its northern location and its cultural heritage. Today, through the six Development Plans, Hokkaido, which has 22% of Japan’s total territory but only 4.5% of its population, enjoys well-developed infrastructure compared with other regions. From the viewpoint of industrial structure, however, it is still biased toward natural-resource supply industries. Manufacturing has not yet developed sufficiently. For instance, the share of manufacturing industries in Hokkaido’s total GDP is less than 10%, compared with a Japanese average of more than 20%. Moreover, within the manufacturing sector, the food-processing industry dominated in 2005, and the share of assembly industries was only 12%, compared with a Japanese average of 48%. So far, the basic development strategy for Hokkaido has been characterized by heavy infrastructure investment. This has produced an oversized construction industry that has come to require continuous infrastructure investment, a situation that has not provided good conditions for diversifying the industrial structure. Hokkaido now faces the problem of declining natural forestry and fishery resources, and there are many impoverished areas that used to depend on the coal-mining industry. These changes in basic economic and social conditions have to be considered in future programs for Hokkaido’s development. As for the fishery industry, the 200-mile economic zone system was introduced internationally in 1976. After that, the Japanese northern sea fishery, which used to be the main source of profit for Hokkaido’s fishery industry, was greatly damaged. In the case of the coal industry, the energy revolution progressed rapidly from around the 1960s and the shift from coal to oil as an energy source took place. The coal industry, once an important industrial sector in Hokkaido, then started to decline. Hokkaido used to produce 20 million tons of coal annually, but in 2002 the last coal mine was closed and the history of the coal-producing industry in Hokkaido came to an end. Some areas that depended on the coal industry were greatly affected by the mine closures and still struggle with economic decline and collapsing communities. As for agricultural products, Hokkaido ranks as the No. 1 producing region in Japan for many products.
Even so, global competition has been getting more severe and has required efforts to differentiate Hokkaido’s products from foreign products and those of other regions. Hokkaido’s main development strategy now seems to be to promote a shift in its industrial structure toward more advanced, higher value-added industries, not only by inviting outside capital but also by fostering indigenous companies. New forms of tourism have also become one of the important targets for promotion. However, under the long-lasting economic recession, Hokkaido’s regional economy has weakened, and the financial condition of many local self-governments has become very critical.
System instruction: Use only the information provided in the text to form your response, do not use any external sources or prior knowledge. Give your answers in a numbered list with an explanation or context following each one. Question: What are the challenges to Hokkaido's economy that can be found in the article? Context block: After the Second World War, Hokkaido drew up 6 Development Plans. These Plans set out the basic development concept and direction. Based on them, important large-scale infrastructure projects such as roads, harbors, railways, airports and large-scale industrial parks were projected. The necessary investment amounts and their economic and social effects on the Hokkaido region were then calculated. The Plans were finally decided by the Cabinet after discussions in the Hokkaido Development Council, which was operated by the Hokkaido Development Agency (now the Ministry of Infrastructure, Land and Transport). The targets and contents of the Plans changed according to changes in the global and Japanese economic situation and in Hokkaido’s stage of development. The basic development strategy for Hokkaido was to utilize and develop its remaining rich natural resources and to support the safe and happy settlement of its people. In the first stage, just after the war, Hokkaido was expected to become a base for supplying foodstuffs and to accept a growing population and people returning from former overseas territories. Later, various large infrastructure projects were planned and implemented to develop its economy, and nowadays the development of Hokkaido’s own identity has come to be emphasized, such as its remaining natural beauty, its northern location and its cultural heritage. Today, through the 6 Development Plans, Hokkaido, which covers 22% of Japan’s total territory but holds only 4.5% of its population, enjoys well-developed infrastructure compared with other regions. From the viewpoint of industrial structure, however, it is still biased toward natural-resource supply industries. Manufacturing has not yet been sufficiently developed. For instance, the share of manufacturing industries in total Hokkaido GDP is less than 10%, compared with a Japanese average of more than 20%. Moreover, within the manufacturing sector itself, the food processing industry dominated in 2005, and the share of assembly industries was only 12%, compared with 48% for the Japanese average. So far, the basic development strategy for Hokkaido has been characterized by heavy infrastructure investment. This has produced an oversized construction industry that has come to require continuous infrastructure investment, a situation that has not provided good conditions for diversifying the industrial structure. Hokkaido now faces the problem of declining natural forestry and fishery resources, and there are many impoverished areas that used to depend on the coal mining industry. This change in basic economic and social conditions has to be considered in future programs for Hokkaido development. As for the fishery industry, the 200-mile economic zone system was introduced internationally in 1976. After that, the Japanese northern sea fishery, which used to be a main source of profit for Hokkaido’s fishery industry, was greatly damaged. In the case of the coal industry, the energy revolution progressed rapidly from around the 1960s and the shift from coal to oil as an energy source took place. The coal industry, once an important industrial sector in Hokkaido, then started to decline.
Hokkaido used to produce 20 million tons of coal annually, but in 2002 the last coal mine was closed and the history of coal production in Hokkaido came to an end. Some areas that depended on the coal industry were greatly affected by the mine closures and still struggle with economic decline and collapsing communities. As for agricultural products, Hokkaido ranks as the No. 1 producing region for many products in Japan. Even so, global competition has become severe and has required efforts to differentiate Hokkaido’s products from foreign and other regions’ products. Hokkaido’s main development strategy now appears to be to shift its industrial structure toward more advanced, higher value-added industries, not only by inviting outside capital but also by fostering indigenous companies. A new type of tourism industry has also become one of the important targets for promotion. However, under the long-lasting economic recession, Hokkaido’s regional economy has weakened and the financial condition of many local self-governments has become very critical.
Use only the information provided in the text to form your response, do not use any external sources or prior knowledge. Give your answers in a numbered list with an explanation or context following each one. EVIDENCE: After the Second World War, Hokkaido drew up 6 Development Plans. These Plans set out the basic development concept and direction. Based on them, important large-scale infrastructure projects such as roads, harbors, railways, airports and large-scale industrial parks were projected. The necessary investment amounts and their economic and social effects on the Hokkaido region were then calculated. The Plans were finally decided by the Cabinet after discussions in the Hokkaido Development Council, which was operated by the Hokkaido Development Agency (now the Ministry of Infrastructure, Land and Transport). The targets and contents of the Plans changed according to changes in the global and Japanese economic situation and in Hokkaido’s stage of development. The basic development strategy for Hokkaido was to utilize and develop its remaining rich natural resources and to support the safe and happy settlement of its people. In the first stage, just after the war, Hokkaido was expected to become a base for supplying foodstuffs and to accept a growing population and people returning from former overseas territories. Later, various large infrastructure projects were planned and implemented to develop its economy, and nowadays the development of Hokkaido’s own identity has come to be emphasized, such as its remaining natural beauty, its northern location and its cultural heritage. Today, through the 6 Development Plans, Hokkaido, which covers 22% of Japan’s total territory but holds only 4.5% of its population, enjoys well-developed infrastructure compared with other regions. From the viewpoint of industrial structure, however, it is still biased toward natural-resource supply industries. Manufacturing has not yet been sufficiently developed. For instance, the share of manufacturing industries in total Hokkaido GDP is less than 10%, compared with a Japanese average of more than 20%. Moreover, within the manufacturing sector itself, the food processing industry dominated in 2005, and the share of assembly industries was only 12%, compared with 48% for the Japanese average. So far, the basic development strategy for Hokkaido has been characterized by heavy infrastructure investment. This has produced an oversized construction industry that has come to require continuous infrastructure investment, a situation that has not provided good conditions for diversifying the industrial structure. Hokkaido now faces the problem of declining natural forestry and fishery resources, and there are many impoverished areas that used to depend on the coal mining industry. This change in basic economic and social conditions has to be considered in future programs for Hokkaido development. As for the fishery industry, the 200-mile economic zone system was introduced internationally in 1976. After that, the Japanese northern sea fishery, which used to be a main source of profit for Hokkaido’s fishery industry, was greatly damaged. In the case of the coal industry, the energy revolution progressed rapidly from around the 1960s and the shift from coal to oil as an energy source took place. The coal industry, once an important industrial sector in Hokkaido, then started to decline. Hokkaido used to produce 20 million tons of coal annually, but in 2002 the last coal mine was closed and the history of coal production in Hokkaido came to an end.
Some areas that depended on the coal industry were greatly affected by the mine closures and still struggle with economic decline and collapsing communities. As for agricultural products, Hokkaido ranks as the No. 1 producing region for many products in Japan. Even so, global competition has become severe and has required efforts to differentiate Hokkaido’s products from foreign and other regions’ products. Hokkaido’s main development strategy now appears to be to shift its industrial structure toward more advanced, higher value-added industries, not only by inviting outside capital but also by fostering indigenous companies. A new type of tourism industry has also become one of the important targets for promotion. However, under the long-lasting economic recession, Hokkaido’s regional economy has weakened and the financial condition of many local self-governments has become very critical. USER: What are the challenges to Hokkaido's economy that can be found in the article? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
36
14
632
null
319
This task requires you to answer a question based only on the information provided in the prompt. You should not use external resources or prior knowledge to answer it. Please answer using language that is sophisticated and would be understood by someone who is familiar with the topic but not an expert.
Please define Common Law Offenses, Surety Statutes, and Statutory Prohibitions as they relate to gun laws.
(2) The burden then falls on respondents to show that New York’s proper-cause requirement is consistent with this Nation’s historical tradition of firearm regulation. To do so, respondents appeal to a variety of historical sources from the late 1200s to the early 1900s. But when it comes to interpreting the Constitution, not all history is created equal. “Constitutional rights are enshrined with the scope they were understood to have when the people adopted them.” Heller, 554 U. S., at 634–635. The Second Amendment was adopted in 1791; the Fourteenth in 1868. Historical evidence that long predates or postdates either time may not illuminate the scope of the right. With these principles in mind, the Court concludes that respondents have failed to meet their burden to identify an American tradition justifying New York’s proper-cause requirement. Pp. 24–62. (i) Respondents’ substantial reliance on English history and custom before the founding makes some sense given Heller’s statement that the Second Amendment “codified a right ‘inherited from our English ancestors.’ ” 554 U. S., at 599. But the Court finds that history ambiguous at best and sees little reason to think that the Framers would have thought it applicable in the New World. The Court cannot conclude from this historical record that, by the time of the founding, English law would have justified restricting the right to publicly bear arms suited for self-defense only to those who demonstrate some special need for self-protection. Pp. 30–37. (ii) Respondents next direct the Court to the history of the Colonies and early Republic, but they identify only three restrictions on public carry from that time. While the Court doubts that just three colonial regulations could suffice to show a tradition of public-carry regulation, even looking at these laws on their own terms, the Court is not convinced that they regulated public carry akin to the New York law at issue. The statutes essentially prohibited bearing arms in a way that spread “fear” or “terror” among the people, including by carrying of “dangerous and unusual weapons.” See 554 U. S., at 627. Whatever the likelihood that handguns were considered “dangerous and unusual” during the colonial period, they are today “the quintessential self-defense weapon.” Id., at 629. Thus, these colonial laws provide no justification for laws restricting the public carry of weapons that are unquestionably in common use today. Pp. 37–42. (iii) Only after the ratification of the Second Amendment in 1791 did public-carry restrictions proliferate. Respondents rely heavily on these restrictions, which generally fell into three categories: common-law offenses, statutory prohibitions, and “surety” statutes. None of these restrictions imposed a substantial burden on public carry analogous to that imposed by New York’s restrictive licensing regime. Common-Law Offenses. As during the colonial and founding periods, the common-law offenses of “affray” or going armed “to the terror of the people” continued to impose some limits on firearm carry in the antebellum period. But there is no evidence indicating that these common-law limitations impaired the right of the general population to peaceable public carry. Statutory Prohibitions. In the early to mid-19th century, some States began enacting laws that proscribed the concealed carry of pistols and other small weapons. 
But the antebellum state-court decisions upholding them evince a consensus view that States could not altogether prohibit the public carry of arms protected by the Second Amendment or state analogues. Surety Statutes. In the mid-19th century, many jurisdictions began adopting laws that required certain individuals to post bond before carrying weapons in public. Contrary to respondents’ position, these surety statutes in no way represented direct precursors to New York’s proper-cause requirement. While New York presumes that individuals have no public carry right without a showing of heightened need, the surety statutes presumed that individuals had a right to public carry that could be burdened only if another could make out a specific showing of “reasonable cause to fear an injury, or breach of the peace.” Mass. Rev. Stat., ch. 134, §16 (1836). Thus, unlike New York’s regime, a showing of special need was required only after an individual was reasonably accused of intending to injure another or breach the peace. And, even then, proving special need simply avoided a fee. In sum, the historical evidence from antebellum America does demonstrate that the manner of public carry was subject to reasonable regulation, but none of these limitations on the right to bear arms operated to prevent law-abiding citizens with ordinary self-defense needs from carrying arms in public for that purpose. Pp. 42–51. (iv) Evidence from around the adoption of the Fourteenth Amendment also does not support respondents’ position. The “discussion of the [right to keep and bear arms] in Congress and in public discourse, as people debated whether and how to secure constitutional rights for newly free slaves,” Heller, 554 U. S., at 614, generally demonstrates that during Reconstruction the right to keep and bear arms had limits that were consistent with a right of the public to peaceably carry handguns for self-defense. The Court acknowledges two Texas cases—English v. State, 35 Tex. 473 and State v. Duke, 42 Tex. 455—that approved a statutory “reasonable grounds” standard for public carry analogous to New York’s proper-cause requirement. But these decisions were outliers and therefore provide little insight into how postbellum courts viewed the right to carry protected arms in public. See Heller, 554 U. S., at 632. Pp. 52–58.
system instruction: [This task requires you to answer a question based only on the information provided in the prompt. You should not use external resources or prior knowledge to answer it. Please answer using language that is sophisticated and would be understood by someone who is familiar with the topic but not an expert.] question: [Please define Common Law Offenses, Surety Statues, and Statutory Prohibitions as they relate to gun laws.] context block: [(2) The burden then falls on respondents to show that New York’s proper-cause requirement is consistent with this Nation’s historical tradition of firearm regulation. To do so, respondents appeal to a variety of historical sources from the late 1200s to the early 1900s. But when it comes to interpreting the Constitution, not all history is created equal. “Constitutional rights are enshrined with the scope they were understood to have when the people adopted them.” Heller, 554 U. S., at 634–635. The Second Amendment was adopted in 1791; the Fourteenth in 1868. Historical evidence that long predates or postdates either time may not illuminate the scope of the right. With these principles in mind, the Court concludes that respondents have failed to meet their burden to identify an American tradition justifying New York’s proper-cause requirement. Pp. 24–62. (i) Respondents’ substantial reliance on English history and custom before the founding makes some sense given Heller’s statement that the Second Amendment “codified a right ‘inherited from our English ancestors.’ ” 554 U. S., at 599. But the Court finds that history ambiguous at best and sees little reason to think that the Framers would have thought it applicable in the New World. The Court cannot conclude from this historical record that, by the time of the founding, English law would have justified restricting the right to publicly bear arms suited for self-defense only to those who demonstrate some special need for self-protection. Pp. 30–37. (ii) Respondents next direct the Court to the history of the Colonies and early Republic, but they identify only three restrictions on public carry from that time. While the Court doubts that just three colonial regulations could suffice to show a tradition of public-carry regulation, even looking at these laws on their own terms, the Court is not convinced that they regulated public carry akin to the New York law at issue. The statutes essentially prohibited bearing arms in a way that spread “fear” or “terror” among the people, including by carrying of “dangerous and unusual weapons.” See 554 U. S., at 627. Whatever the likelihood that handguns were considered “dangerous and unusual” during the colonial period, they are today “the quintessential self-defense weapon.” Id., at 629. Thus, these colonial laws provide no justification for laws restricting the public carry of weapons that are unquestionably in common use today. Pp. 37–42. (iii) Only after the ratification of the Second Amendment in 1791 did public-carry restrictions proliferate. Respondents rely heavily on these restrictions, which generally fell into three categories: common-law offenses, statutory prohibitions, and “surety” statutes. None of these restrictions imposed a substantial burden on public carry analogous to that imposed by New York’s restrictive licensing regime. Common-Law Offenses. 
As during the colonial and founding periods, the common-law offenses of “affray” or going armed “to the terror of the people” continued to impose some limits on firearm carry in the antebellum period. But there is no evidence indicating that these common-law limitations impaired the right of the general population to peaceable public carry. Statutory Prohibitions. In the early to mid-19th century, some States began enacting laws that proscribed the concealed carry of pistols and other small weapons. But the antebellum state-court decisions upholding them evince a consensus view that States could not altogether prohibit the public carry of arms protected by the Second Amendment or state analogues. Surety Statutes. In the mid-19th century, many jurisdictions began adopting laws that required certain individuals to post bond before carrying weapons in public. Contrary to respondents’ position, these surety statutes in no way represented direct precursors to New York’s proper-cause requirement. While New York presumes that individuals have no public carry right without a showing of heightened need, the surety statutes presumed that individuals had a right to public carry that could be burdened only if another could make out a specific showing of “reasonable cause to fear an injury, or breach of the peace.” Mass. Rev. Stat., ch. 134, §16 (1836). Thus, unlike New York’s regime, a showing of special need was required only after an individual was reasonably accused of intending to injure another or breach the peace. And, even then, proving special need simply avoided a fee. In sum, the historical evidence from antebellum America does demonstrate that the manner of public carry was subject to reasonable regulation, but none of these limitations on the right to bear arms operated to prevent law-abiding citizens with ordinary self-defense needs from carrying arms in public for that purpose. Pp. 42–51. (iv) Evidence from around the adoption of the Fourteenth Amendment also does not support respondents’ position. The “discussion of the [right to keep and bear arms] in Congress and in public discourse, as people debated whether and how to secure constitutional rights for newly free slaves,” Heller, 554 U. S., at 614, generally demonstrates that during Reconstruction the right to keep and bear arms had limits that were consistent with a right of the public to peaceably carry handguns for self-defense. The Court acknowledges two Texas cases—English v. State, 35 Tex. 473 and State v. Duke, 42 Tex. 455—that approved a statutory “reasonable grounds” standard for public carry analogous to New York’s proper-cause requirement. But these decisions were outliers and therefore provide little insight into how postbellum courts viewed the right to carry protected arms in public. See Heller, 554 U. S., at 632. Pp. 52–58.]
This task requires you to answer a question based only on the information provided in the prompt. You should not use external resources or prior knowledge to answer it. Please answer using language that is sophisticated and would be understood by someone who is familiar with the topic but not an expert. EVIDENCE: (2) The burden then falls on respondents to show that New York’s proper-cause requirement is consistent with this Nation’s historical tradition of firearm regulation. To do so, respondents appeal to a variety of historical sources from the late 1200s to the early 1900s. But when it comes to interpreting the Constitution, not all history is created equal. “Constitutional rights are enshrined with the scope they were understood to have when the people adopted them.” Heller, 554 U. S., at 634–635. The Second Amendment was adopted in 1791; the Fourteenth in 1868. Historical evidence that long predates or postdates either time may not illuminate the scope of the right. With these principles in mind, the Court concludes that respondents have failed to meet their burden to identify an American tradition justifying New York’s proper-cause requirement. Pp. 24–62. (i) Respondents’ substantial reliance on English history and custom before the founding makes some sense given Heller’s statement that the Second Amendment “codified a right ‘inherited from our English ancestors.’ ” 554 U. S., at 599. But the Court finds that history ambiguous at best and sees little reason to think that the Framers would have thought it applicable in the New World. The Court cannot conclude from this historical record that, by the time of the founding, English law would have justified restricting the right to publicly bear arms suited for self-defense only to those who demonstrate some special need for self-protection. Pp. 30–37. (ii) Respondents next direct the Court to the history of the Colonies and early Republic, but they identify only three restrictions on public carry from that time. While the Court doubts that just three colonial regulations could suffice to show a tradition of public-carry regulation, even looking at these laws on their own terms, the Court is not convinced that they regulated public carry akin to the New York law at issue. The statutes essentially prohibited bearing arms in a way that spread “fear” or “terror” among the people, including by carrying of “dangerous and unusual weapons.” See 554 U. S., at 627. Whatever the likelihood that handguns were considered “dangerous and unusual” during the colonial period, they are today “the quintessential self-defense weapon.” Id., at 629. Thus, these colonial laws provide no justification for laws restricting the public carry of weapons that are unquestionably in common use today. Pp. 37–42. (iii) Only after the ratification of the Second Amendment in 1791 did public-carry restrictions proliferate. Respondents rely heavily on these restrictions, which generally fell into three categories: common-law offenses, statutory prohibitions, and “surety” statutes. None of these restrictions imposed a substantial burden on public carry analogous to that imposed by New York’s restrictive licensing regime. Common-Law Offenses. As during the colonial and founding periods, the common-law offenses of “affray” or going armed “to the terror of the people” continued to impose some limits on firearm carry in the antebellum period. 
But there is no evidence indicating that these common-law limitations impaired the right of the general population to peaceable public carry. Statutory Prohibitions. In the early to mid-19th century, some States began enacting laws that proscribed the concealed carry of pistols and other small weapons. But the antebellum state-court decisions upholding them evince a consensus view that States could not altogether prohibit the public carry of arms protected by the Second Amendment or state analogues. Surety Statutes. In the mid-19th century, many jurisdictions began adopting laws that required certain individuals to post bond before carrying weapons in public. Contrary to respondents’ position, these surety statutes in no way represented direct precursors to New York’s proper-cause requirement. While New York presumes that individuals have no public carry right without a showing of heightened need, the surety statutes presumed that individuals had a right to public carry that could be burdened only if another could make out a specific showing of “reasonable cause to fear an injury, or breach of the peace.” Mass. Rev. Stat., ch. 134, §16 (1836). Thus, unlike New York’s regime, a showing of special need was required only after an individual was reasonably accused of intending to injure another or breach the peace. And, even then, proving special need simply avoided a fee. In sum, the historical evidence from antebellum America does demonstrate that the manner of public carry was subject to reasonable regulation, but none of these limitations on the right to bear arms operated to prevent law-abiding citizens with ordinary self-defense needs from carrying arms in public for that purpose. Pp. 42–51. (iv) Evidence from around the adoption of the Fourteenth Amendment also does not support respondents’ position. The “discussion of the [right to keep and bear arms] in Congress and in public discourse, as people debated whether and how to secure constitutional rights for newly free slaves,” Heller, 554 U. S., at 614, generally demonstrates that during Reconstruction the right to keep and bear arms had limits that were consistent with a right of the public to peaceably carry handguns for self-defense. The Court acknowledges two Texas cases—English v. State, 35 Tex. 473 and State v. Duke, 42 Tex. 455—that approved a statutory “reasonable grounds” standard for public carry analogous to New York’s proper-cause requirement. But these decisions were outliers and therefore provide little insight into how postbellum courts viewed the right to carry protected arms in public. See Heller, 554 U. S., at 632. Pp. 52–58. USER: Please define Common Law Offenses, Surety Statues, and Statutory Prohibitions as they relate to gun laws. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
52
16
894
null
750
Present your answer without any extraneous information.
What is the general customer feedback?
A day ago. Failure after failure. Switching to Mint took ages. You failed to enable one of us to connect at all. Her money is going to another provider. Texting rarely works or can take hours to deliver. Now, my phone has stopped working at all. Wi-Fi works. Wi-Fi calling doesn't. The other two phones on the plan work. I just got an email telling me my next 6 months are free. That would mean something if I actually had service. After all, zero service is only worth zero dollars. A soon-to-be ex-customer. Date of experience: March 7, 2024. Reply from Mint Mobile, 21 hours ago: Hi, thank you for contacting Mint Mobile. We're sorry for any inconvenience you may have experienced with the service. This is definitely not the experience we want you to have when using our service. Since we are unable to provide you with assistance through this channel, please contact us through our customer support channels at 800-683-7452 or our live chat platform, 7 days a week from 5 am to 7 pm PST, to take a look at your case and find the best solution. Christy Unruh, 1 review, US, 2 days ago. Hassle free service. The company never bothers me with upsells & my better half will change service once his contract is up. He has had the same number since the early 90s. Anything they might want to tell me about is done by my email, which is awesome! Date of experience: February 06, 2024. Grace Fong, 1 review, US, 18 hours ago. A Canadian snowbird who wants to use the same phone number again next year. I am a Canadian who just began to "snowbird" in Hawaii this year - it means I came here in the winter for a few months to escape the deep freeze. Having the Mint mobile account has been very convenient for doing all kinds of business and shopping. This time I subscribed for three months as I return to Canada April 23. I will come back to my apartment in Kailua, HI again in Jan. 2025. It would be wonderful if I could use the same phone number next year. Would that be possible? Thank you. Date of experience: March 07, 2024. Updated 2 days ago. Had for almost 2 years and for the most part been happy until I go over data limits. I'm ok with a so-called "SLOW DOWN" speed when over my limit, but when it is so slow info times out it is a totally useless service. Will be switching to another service unless another data level with Mint Mobile gives me more options with their competition.
Present your answer without any extraneous information. What is the general customer feedback? A day ago. Failure after failure. Switching to Mint took ages. You failed to enable one of us to connect at all. Her money is going to another provider. Texting rarely works or can take hours to deliver. Now, my phone has stopped working at all. Wi-Fi works. Wi-Fi calling doesn't. The other two phones on the plan work. I just got an email telling me my next 6 months are free. That would mean something if I actually had service. After all, zero service is only worth zero dollars. A soon-to-be ex-customer. Date of experience: March 7, 2024. Reply from Mint Mobile, 21 hours ago: Hi, thank you for contacting Mint Mobile. We're sorry for any inconvenience you may have experienced with the service. This is definitely not the experience we want you to have when using our service. Since we are unable to provide you with assistance through this channel, please contact us through our customer support channels at 800-683-7452 or our live chat platform, 7 days a week from 5 am to 7 pm PST, to take a look at your case and find the best solution. Christy Unruh, 1 review, US, 2 days ago. Hassle free service. The company never bothers me with upsells & my better half will change service once his contract is up. He has had the same number since the early 90s. Anything they might want to tell me about is done by my email, which is awesome! Date of experience: February 06, 2024. Grace Fong, 1 review, US, 18 hours ago. A Canadian snowbird who wants to use the same phone number again next year. I am a Canadian who just began to "snowbird" in Hawaii this year - it means I came here in the winter for a few months to escape the deep freeze. Having the Mint mobile account has been very convenient for doing all kinds of business and shopping. This time I subscribed for three months as I return to Canada April 23. I will come back to my apartment in Kailua, HI again in Jan. 2025. It would be wonderful if I could use the same phone number next year. Would that be possible? Thank you. Date of experience: March 07, 2024. Updated 2 days ago. Had for almost 2 years and for the most part been happy until I go over data limits. I'm ok with a so-called "SLOW DOWN" speed when over my limit, but when it is so slow info times out it is a totally useless service. Will be switching to another service unless another data level with Mint Mobile gives me more options with their competition.
Present your answer without any extraneous information. EVIDENCE: A day ago. Failure after failure. Switching to Mint took ages. You failed to enable one of us to connect at all. Her money is going to another provider. Texting rarely works or can take hours to deliver. Now, my phone has stopped working at all. Wi-Fi works. Wi-Fi calling doesn't. The other two phones on the plan work. I just got an email telling me my next 6 months are free. That would mean something if I actually had service. After all, zero service is only worth zero dollars. A soon-to-be ex-customer. Date of experience: March 7, 2024. Reply from Mint Mobile, 21 hours ago: Hi, thank you for contacting Mint Mobile. We're sorry for any inconvenience you may have experienced with the service. This is definitely not the experience we want you to have when using our service. Since we are unable to provide you with assistance through this channel, please contact us through our customer support channels at 800-683-7452 or our live chat platform, 7 days a week from 5 am to 7 pm PST, to take a look at your case and find the best solution. Christy Unruh, 1 review, US, 2 days ago. Hassle free service. The company never bothers me with upsells & my better half will change service once his contract is up. He has had the same number since the early 90s. Anything they might want to tell me about is done by my email, which is awesome! Date of experience: February 06, 2024. Grace Fong, 1 review, US, 18 hours ago. A Canadian snowbird who wants to use the same phone number again next year. I am a Canadian who just began to "snowbird" in Hawaii this year - it means I came here in the winter for a few months to escape the deep freeze. Having the Mint mobile account has been very convenient for doing all kinds of business and shopping. This time I subscribed for three months as I return to Canada April 23. I will come back to my apartment in Kailua, HI again in Jan. 2025. It would be wonderful if I could use the same phone number next year. Would that be possible? Thank you. Date of experience: March 07, 2024. Updated 2 days ago. Had for almost 2 years and for the most part been happy until I go over data limits. I'm ok with a so-called "SLOW DOWN" speed when over my limit, but when it is so slow info times out it is a totally useless service. Will be switching to another service unless another data level with Mint Mobile gives me more options with their competition. USER: What is the general customer feedback? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
7
6
476
null
671
Only use the information made available in the prompt to formulate an answer. Do not use any outside sources or prior knowledge.
Summarize the consequences of the mergers described in the text.
On August 28, 2017, Amazon acquired Whole Foods Market, a grocery retailer, for approximately $13.2 billion.47 After reviewing the proposed acquisition, the FTC determined no further action was needed at the time.48 Prior to the acquisition, Amazon offered the online grocery delivery service Amazon Fresh, which launched in 2007,49 and Prime Pantry, which launched in 2014 and ended in January 2021.50 By acquiring Whole Foods Market, Amazon obtained brick-and-mortar grocery store locations that it was able to integrate with its online services.51 For example, shoppers with an Amazon Prime membership52 are eligible for discounts and free pickup or delivery of Whole Foods Market groceries in selected zip codes,53 and Amazon Hub Lockers—where consumers can pick up products purchased on Amazon’s website—are often located in Whole Foods Markets.54 Amazon’s acquisition of Whole Foods Market may have increased competition in the grocery retail market. Prior to the acquisition, Walmart was the largest grocery retailer, followed by Kroger.55 Progressive Grocer, a research group, estimates that in 2020, Walmart had the highest U.S. retail sales of grocery items, followed by Amazon.56 However, Duff & Phelps, a consulting firm, indicates that Amazon comprises only a small portion of the grocery retail market and that it serves as “more of a symbolic threat.”57 Nevertheless, other grocery retailers have responded by implementing changes in response to competitive pressure from Amazon.58 Competitive pressure from Amazon may have incentivized other grocery retailers to start offering online delivery services. In 2017, the year Amazon acquired Whole Foods, Walmart launched an online delivery service in selected cities;59 Kroger launched an online delivery service in selected cities in 2018.60 In 2020, Walmart launched Walmart+,61 a membership delivery service that does not have a minimum order requirement,62 similar to an Amazon Prime membership. Consumers may have benefited from food retailers offering their own online delivery services, particularly as many of these stores offer free delivery on orders over $35. These changes may have also increased pressure on other online grocery delivery services, such as Instacart, a third-party service that delivers online groceries from selected stores in selected cities; the service launched in 2012 and stopped delivering groceries from Whole Foods in 2019.63 Amazon’s acquisition of Whole Foods Market raised concern about its growing dominance in the retail industry, particularly in e-commerce. According to eMarketer, a market research company, Amazon had the greatest share of e-commerce sales at 38.7% in 2020; Walmart had the second- greatest share at 5.3% (Figure 1). The estimate from eMarketer includes all online sales, including products that Amazon does not offer. The House Subcommittee on Antitrust staff report finds that by restricting products to those sold on Amazon, a market share of 50% or higher may be a more credible estimate of Amazon’s share of online sales, and that over 60% of all U.S. online product searches begin on Amazon.64 Through its acquisition of Whole Foods, Amazon gained access to additional consumer data, strengthening its bargaining power with suppliers.65 In addition, Amazon has integrated vertically, such as by offering products under its private label AmazonBasics and by creating its own delivery system. 
Amazon has reportedly invested $60 billion since 2014 in its delivery network, including capital leases for warehouses and aircraft; in 2019, it had the fourth-largest share of U.S. package deliveries, behind FedEx, United Parcel Service, and the U.S. Postal Service.66 By integrating vertically, Amazon may be able to further strengthen its position in e-commerce; if, for example, it is able to provide faster delivery,67 consumers could benefit even if it becomes more difficult for other companies to compete. Facebook’s Acquisition of Instagram Facebook announced that it had reached an agreement to acquire Instagram, a social networking service (i.e., social media platform), for $1 billion on April 9, 2012.68 The FTC reviewed the acquisition, and on August 22, 2012, it closed the investigation without taking action.69 On December 9, 2020, the FTC filed a lawsuit against Facebook, alleging that “Facebook has maintained its monopoly position by buying up companies that present competitive threats, ” in addition to imposing restrictive policies against companies it does not acquire.70 A coalition of 46 state attorneys general, led by New York Attorney General Letitia James, filed a parallel lawsuit against Facebook, also alleging that Facebook acquired companies to eliminate competitive threats.71 Both lawsuits72 specifically mention Facebook’s acquisitions of Instagram and WhatsApp, a messaging app for mobile devices.73 Prior to the acquisition, Facebook CEO Mark Zuckerberg stated in an internal email that “Instagram has become a large and viable competitor to us on mobile photos, which will increasingly be the future of photos.”74 This statement has been used to support the claim that Facebook acquired Instagram with the intention of eliminating a potential competitor. It is unclear how successful Instagram would have been had it not been acquired by Facebook, illustrating the difficulty of predicting whether a nascent firm could become a viable competitor. Instagram was a relatively new company when it was acquired,75 and grew rapidly thereafter, from about 100 million monthly active users (MAUs) in February 2013 to 500 million MAUs in June 2016 and 1 billion MAUs in June 2018.76 As it grew in popularity, Instagram was able to use Facebook’s resources, such as its advertising services and its infrastructure, which hosts and processes large amounts of consumer data. These have been key to the profitability of Instagram, which hosts a wide range of users, including “influencers”—that is, users with a large number of followers who are paid by sponsors to market certain products.77 It is possible that without the merger, Instagram would have been among the platforms that have struggled to compete in digital markets because of resource constraints. This occurred with the social networking service Friendster, which turned down a $30 million buyout offer from Google in 2003 but then struggled with technical difficulties as its user base grew; users left the platform for other social media sites, and Friendster eventually closed down.78 Another complication in evaluating the effect of Facebook’s acquisition of Instagram is determining how the market should be defined, particularly in digital markets that can quickly evolve. Social networking services can include a wide range of platforms. When Facebook acquired Instagram in 2012, one of the defining features of social networking services—a category that than included Friendster and Myspace, among others—was the networks users could create. 
Users could clearly indicate the users in their respective network(s) on the social networking service,79 although some may have chosen to keep their network(s) private. At that time, Instagram was described as a photo-sharing app, arguably competing with apps like Photobucket and Flickr, rather than with Facebook. Additional types of platforms can be considered social networking services: Reddit allows users to create communities based on their interests; LinkedIn allows users to create connections for business and employment opportunities; and TikTok allows users to share short-form videos.80 Some of these platforms allow users to connect with any other user on the platform rather than only with users in their personal network, focusing on the content rather than the user. These changes suggest that a user’s ability to create social networks may no longer be the defining feature of social networking services. In addition, social networking services are not necessarily substitutes for one another. For example, although Instagram and Microsoft’s LinkedIn are both typically viewed as social networking services, it is unlikely that users would substitute one platform for the other. One report estimates that internet users had an average of about seven social media accounts, suggesting that some users rely on different social media platforms for different purposes.81
On August 28, 2017, Amazon acquired Whole Foods Market, a grocery retailer, for approximately $13.2 billion.47 After reviewing the proposed acquisition, the FTC determined no further action was needed at the time.48 Prior to the acquisition, Amazon offered the online grocery delivery service Amazon Fresh, which launched in 2007,49 and Prime Pantry, which launched in 2014 and ended in January 2021.50 By acquiring Whole Foods Market, Amazon obtained brick-and-mortar grocery store locations that it was able to integrate with its online services.51 For example, shoppers with an Amazon Prime membership52 are eligible for discounts and free pickup or delivery of Whole Foods Market groceries in selected zip codes,53 and Amazon Hub Lockers—where consumers can pick up products purchased on Amazon’s website—are often located in Whole Foods Markets.54 Amazon’s acquisition of Whole Foods Market may have increased competition in the grocery retail market. Prior to the acquisition, Walmart was the largest grocery retailer, followed by Kroger.55 Progressive Grocer, a research group, estimates that in 2020, Walmart had the highest U.S. retail sales of grocery items, followed by Amazon.56 However, Duff & Phelps, a consulting firm, indicates that Amazon comprises only a small portion of the grocery retail market and that it serves as “more of a symbolic threat.”57 Nevertheless, other grocery retailers have responded by implementing changes in response to competitive pressure from Amazon.58 Competitive pressure from Amazon may have incentivized other grocery retailers to start offering online delivery services. In 2017, the year Amazon acquired Whole Foods, Walmart launched an online delivery service in selected cities;59 Kroger launched an online delivery service in selected cities in 2018.60 In 2020, Walmart launched Walmart+,61 a membership delivery service that does not have a minimum order requirement,62 similar to an Amazon Prime membership. Consumers may have benefited from food retailers offering their own online delivery services, particularly as many of these stores offer free delivery on orders over $35. These changes may have also increased pressure on other online grocery delivery services, such as Instacart, a third-party service that delivers online groceries from selected stores in selected cities; the service launched in 2012 and stopped delivering groceries from Whole Foods in 2019.63 Amazon’s acquisition of Whole Foods Market raised concern about its growing dominance in the retail industry, particularly in e-commerce. According to eMarketer, a market research company, Amazon had the greatest share of e-commerce sales at 38.7% in 2020; Walmart had the second- greatest share at 5.3% (Figure 1). The estimate from eMarketer includes all online sales, including products that Amazon does not offer. The House Subcommittee on Antitrust staff report finds that by restricting products to those sold on Amazon, a market share of 50% or higher may be a more credible estimate of Amazon’s share of online sales, and that over 60% of all U.S. online product searches begin on Amazon.64 Through its acquisition of Whole Foods, Amazon gained access to additional consumer data, strengthening its bargaining power with suppliers.65 In addition, Amazon has integrated vertically, such as by offering products under its private label AmazonBasics and by creating its own delivery system. 
Amazon has reportedly invested $60 billion since 2014 in its delivery network, including capital leases for warehouses and aircraft; in 2019, it had the fourth-largest share of U.S. package deliveries, behind FedEx, United Parcel Service, and the U.S. Postal Service.66 By integrating vertically, Amazon may be able to further strengthen its position in e-commerce; if, for example, it is able to provide faster delivery,67 consumers could benefit even if it becomes more difficult for other companies to compete. Facebook’s Acquisition of Instagram Facebook announced that it had reached an agreement to acquire Instagram, a social networking service (i.e., social media platform), for $1 billion on April 9, 2012.68 The FTC reviewed the acquisition, and on August 22, 2012, it closed the investigation without taking action.69 On December 9, 2020, the FTC filed a lawsuit against Facebook, alleging that “Facebook has maintained its monopoly position by buying up companies that present competitive threats, ” in addition to imposing restrictive policies against companies it does not acquire.70 A coalition of 46 state attorneys general, led by New York Attorney General Letitia James, filed a parallel lawsuit against Facebook, also alleging that Facebook acquired companies to eliminate competitive threats.71 Both lawsuits72 specifically mention Facebook’s acquisitions of Instagram and WhatsApp, a messaging app for mobile devices.73 Prior to the acquisition, Facebook CEO Mark Zuckerberg stated in an internal email that “Instagram has become a large and viable competitor to us on mobile photos, which will increasingly be the future of photos.”74 This statement has been used to support the claim that Facebook acquired Instagram with the intention of eliminating a potential competitor. It is unclear how successful Instagram would have been had it not been acquired by Facebook, illustrating the difficulty of predicting whether a nascent firm could become a viable competitor. Instagram was a relatively new company when it was acquired,75 and grew rapidly thereafter, from about 100 million monthly active users (MAUs) in February 2013 to 500 million MAUs in June 2016 and 1 billion MAUs in June 2018.76 As it grew in popularity, Instagram was able to use Facebook’s resources, such as its advertising services and its infrastructure, which hosts and processes large amounts of consumer data. These have been key to the profitability of Instagram, which hosts a wide range of users, including “influencers”—that is, users with a large number of followers who are paid by sponsors to market certain products.77 It is possible that without the merger, Instagram would have been among the platforms that have struggled to compete in digital markets because of resource constraints. This occurred with the social networking service Friendster, which turned down a $30 million buyout offer from Google in 2003 but then struggled with technical difficulties as its user base grew; users left the platform for other social media sites, and Friendster eventually closed down.78 Another complication in evaluating the effect of Facebook’s acquisition of Instagram is determining how the market should be defined, particularly in digital markets that can quickly evolve. Social networking services can include a wide range of platforms. When Facebook acquired Instagram in 2012, one of the defining features of social networking services—a category that than included Friendster and Myspace, among others—was the networks users could create. 
Users could clearly indicate the users in their respective network(s) on the social networking service,79 although some may have chosen to keep their network(s) private. At that time, Instagram was described as a photo-sharing app, arguably competing with apps like Photobucket and Flickr, rather than with Facebook. Additional types of platforms can be considered social networking services: Reddit allows users to create communities based on their interests; LinkedIn allows users to create connections for business and employment opportunities; and TikTok allows users to share short-form videos.80 Some of these platforms allow users to connect with any other user on the platform rather than only with users in their personal network, focusing on the content rather than the user. These changes suggest that a user’s ability to create social networks may no longer be the defining feature of social networking services. In addition, social networking services are not necessarily substitutes for one another. For example, although Instagram and Microsoft’s LinkedIn are both typically viewed as social networking services, it is unlikely that users would substitute one platform for the other. One report estimates that internet users had an average of about seven social media accounts, suggesting that some users rely on different social media platforms for different purposes.81 Summarize the consequences of the mergers described in the text. Only use the information made available in the prompt to formulate an answer. Do not use any outside sources or prior knowledge.
Only use the information made available in the prompt to formulate an answer. Do not use any outside sources or prior knowledge. EVIDENCE: On August 28, 2017, Amazon acquired Whole Foods Market, a grocery retailer, for approximately $13.2 billion.47 After reviewing the proposed acquisition, the FTC determined no further action was needed at the time.48 Prior to the acquisition, Amazon offered the online grocery delivery service Amazon Fresh, which launched in 2007,49 and Prime Pantry, which launched in 2014 and ended in January 2021.50 By acquiring Whole Foods Market, Amazon obtained brick-and-mortar grocery store locations that it was able to integrate with its online services.51 For example, shoppers with an Amazon Prime membership52 are eligible for discounts and free pickup or delivery of Whole Foods Market groceries in selected zip codes,53 and Amazon Hub Lockers—where consumers can pick up products purchased on Amazon’s website—are often located in Whole Foods Markets.54 Amazon’s acquisition of Whole Foods Market may have increased competition in the grocery retail market. Prior to the acquisition, Walmart was the largest grocery retailer, followed by Kroger.55 Progressive Grocer, a research group, estimates that in 2020, Walmart had the highest U.S. retail sales of grocery items, followed by Amazon.56 However, Duff & Phelps, a consulting firm, indicates that Amazon comprises only a small portion of the grocery retail market and that it serves as “more of a symbolic threat.”57 Nevertheless, other grocery retailers have responded by implementing changes in response to competitive pressure from Amazon.58 Competitive pressure from Amazon may have incentivized other grocery retailers to start offering online delivery services. In 2017, the year Amazon acquired Whole Foods, Walmart launched an online delivery service in selected cities;59 Kroger launched an online delivery service in selected cities in 2018.60 In 2020, Walmart launched Walmart+,61 a membership delivery service that does not have a minimum order requirement,62 similar to an Amazon Prime membership. Consumers may have benefited from food retailers offering their own online delivery services, particularly as many of these stores offer free delivery on orders over $35. These changes may have also increased pressure on other online grocery delivery services, such as Instacart, a third-party service that delivers online groceries from selected stores in selected cities; the service launched in 2012 and stopped delivering groceries from Whole Foods in 2019.63 Amazon’s acquisition of Whole Foods Market raised concern about its growing dominance in the retail industry, particularly in e-commerce. According to eMarketer, a market research company, Amazon had the greatest share of e-commerce sales at 38.7% in 2020; Walmart had the second- greatest share at 5.3% (Figure 1). The estimate from eMarketer includes all online sales, including products that Amazon does not offer. The House Subcommittee on Antitrust staff report finds that by restricting products to those sold on Amazon, a market share of 50% or higher may be a more credible estimate of Amazon’s share of online sales, and that over 60% of all U.S. 
online product searches begin on Amazon.64 Through its acquisition of Whole Foods, Amazon gained access to additional consumer data, strengthening its bargaining power with suppliers.65 In addition, Amazon has integrated vertically, such as by offering products under its private label AmazonBasics and by creating its own delivery system. Amazon has reportedly invested $60 billion since 2014 in its delivery network, including capital leases for warehouses and aircraft; in 2019, it had the fourth-largest share of U.S. package deliveries, behind FedEx, United Parcel Service, and the U.S. Postal Service.66 By integrating vertically, Amazon may be able to further strengthen its position in e-commerce; if, for example, it is able to provide faster delivery,67 consumers could benefit even if it becomes more difficult for other companies to compete. Facebook’s Acquisition of Instagram Facebook announced that it had reached an agreement to acquire Instagram, a social networking service (i.e., social media platform), for $1 billion on April 9, 2012.68 The FTC reviewed the acquisition, and on August 22, 2012, it closed the investigation without taking action.69 On December 9, 2020, the FTC filed a lawsuit against Facebook, alleging that “Facebook has maintained its monopoly position by buying up companies that present competitive threats, ” in addition to imposing restrictive policies against companies it does not acquire.70 A coalition of 46 state attorneys general, led by New York Attorney General Letitia James, filed a parallel lawsuit against Facebook, also alleging that Facebook acquired companies to eliminate competitive threats.71 Both lawsuits72 specifically mention Facebook’s acquisitions of Instagram and WhatsApp, a messaging app for mobile devices.73 Prior to the acquisition, Facebook CEO Mark Zuckerberg stated in an internal email that “Instagram has become a large and viable competitor to us on mobile photos, which will increasingly be the future of photos.”74 This statement has been used to support the claim that Facebook acquired Instagram with the intention of eliminating a potential competitor. It is unclear how successful Instagram would have been had it not been acquired by Facebook, illustrating the difficulty of predicting whether a nascent firm could become a viable competitor. Instagram was a relatively new company when it was acquired,75 and grew rapidly thereafter, from about 100 million monthly active users (MAUs) in February 2013 to 500 million MAUs in June 2016 and 1 billion MAUs in June 2018.76 As it grew in popularity, Instagram was able to use Facebook’s resources, such as its advertising services and its infrastructure, which hosts and processes large amounts of consumer data. These have been key to the profitability of Instagram, which hosts a wide range of users, including “influencers”—that is, users with a large number of followers who are paid by sponsors to market certain products.77 It is possible that without the merger, Instagram would have been among the platforms that have struggled to compete in digital markets because of resource constraints. 
This occurred with the social networking service Friendster, which turned down a $30 million buyout offer from Google in 2003 but then struggled with technical difficulties as its user base grew; users left the platform for other social media sites, and Friendster eventually closed down.78 Another complication in evaluating the effect of Facebook’s acquisition of Instagram is determining how the market should be defined, particularly in digital markets that can quickly evolve. Social networking services can include a wide range of platforms. When Facebook acquired Instagram in 2012, one of the defining features of social networking services—a category that then included Friendster and Myspace, among others—was the networks users could create. Users could clearly indicate the users in their respective network(s) on the social networking service,79 although some may have chosen to keep their network(s) private. At that time, Instagram was described as a photo-sharing app, arguably competing with apps like Photobucket and Flickr, rather than with Facebook. Additional types of platforms can be considered social networking services: Reddit allows users to create communities based on their interests; LinkedIn allows users to create connections for business and employment opportunities; and TikTok allows users to share short-form videos.80 Some of these platforms allow users to connect with any other user on the platform rather than only with users in their personal network, focusing on the content rather than the user. These changes suggest that a user’s ability to create social networks may no longer be the defining feature of social networking services. In addition, social networking services are not necessarily substitutes for one another. For example, although Instagram and Microsoft’s LinkedIn are both typically viewed as social networking services, it is unlikely that users would substitute one platform for the other. One report estimates that internet users had an average of about seven social media accounts, suggesting that some users rely on different social media platforms for different purposes.81 USER: Summarize the consequences of the mergers described in the text. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
22
10
1,258
null
677
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
I want my case thrown out. The man that is representing me comes to court drunk and has mishandled evidence needed to acquit me. How can I get the judge to throw it out and expunge my record in Texas?
Can the Judge Dismiss a Case in Texas? Written by The Law Office of David White PLLC: Austin Criminal Lawyer, reviewed by David D. White May 18, 2024 can the judge dismiss a case Yes, the decision to dismiss a criminal case can lie within the discretion of the judge overseeing the proceedings. So can the judge dismiss a case in Texas?, well a case dismissal occurs when the court ends legal proceedings prematurely, effectively terminating the case before reaching a verdict. This can happen at any stage of the legal process, from the initial filing of charges to the trial itself. A dismissal is the optimal outcome for the defendant, as it means the case is dropped and no further action is taken. However, it’s essential to note that a dismissal does not mean the charge or arrest or citation are expunged from your record. For the best chance of having charges dismissed in your criminal case or obtaining another favorable outcome, discuss your case with an Austin Criminal Defense Lawyer now. What is a Case Dismissal? A criminal case dismissal by a judge occurs when a judge determines that there is not enough evidence to proceed with the case or that the prosecution has failed to meet its burden of proof beyond a reasonable doubt. A dismissal does not necessarily mean that the defendant is innocent or that they did not commit the alleged offense. Instead, it reflects the judge’s determination that the evidence presented by the prosecution is not sufficient to proceed with the case. In some instances, a dismissal may also occur if there are procedural errors or violations of the defendant’s constitutional rights. If you are facing criminal charges, you need a skilled attorney who can aggressively advocate for your rights and work toward the best possible outcome for your case. An experienced criminal defense attorney can assess the strength of the evidence against you, identify potential weaknesses in the prosecution’s case, and present persuasive arguments to the judge in support of a dismissal. Remember, each case is different and will require its own strategic defense tailored to your specific circumstances. Part of your defense lawyer’s job is to determine whether there are grounds for dismissal and take appropriate action, so you want a skilled attorney working on your case as soon as possible. Seeking a Dismissal from the Judge in Criminal Court Judges have the authority to dismiss cases, but they only do so under certain circumstances. They consider the facts presented, evaluate legal arguments, and assess the overall fairness and integrity of the case. Ultimately, the judge’s primary responsibility is to administer justice and ensure the proper functioning of the legal system. However, a judge will not review a case for errors or deficiencies on their own. Instead, your criminal defense lawyer will need to petition the court to dismiss the case by filing a proper motion. This motion must set forth the facts and arguments supporting the dismissal. Motion practice in a criminal case involves complicated legal requirements and procedures. You need an experienced defense attorney on board in your case right away, so they can identify any grounds for a possible dismissal and file persuasive motions with the court. If My Case is Dismissed, Will it Still Be in my File? While a case dismissal can be a favorable outcome, it’s important to understand that it does not automatically erase the charges from your record. 
In Texas, dismissed cases are typically still part of your criminal record, but they may be eligible for expunction or nondisclosure in certain circumstances. Expunction completely removes the record of the case, as if it never happened, while nondisclosure limits access to the record by most employers and the public. To determine if your case is eligible for expunction or nondisclosure, it is advisable to consult with an experienced Austin criminal defense attorney who can guide you through the process. You Need a Tough, Skilled Criminal Defense Lawyer in Austin, Texas Navigating the legal system can be overwhelming, especially when you’re hoping for a case dismissal in Texas. That’s where The Law Office of David D. White: Austin Criminal Lawyer comes in. Our experienced Austin criminal defense attorneys are dedicated to helping individuals like you overcome legal challenges and achieve the best possible outcome. To get the answers you need and start building your defense, contact our offices today and schedule a free, no-pressure consultation. David D. White
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. I want my case thrown out. The man that is representing me comes to court drunk and has mishandled evidence needed to acquit me. How can I get the judge to throw it out and expunge my record in Texas? Can the Judge Dismiss a Case in Texas? Written by The Law Office of David White PLLC: Austin Criminal Lawyer, reviewed by David D. White May 18, 2024 can the judge dismiss a case Yes, the decision to dismiss a criminal case can lie within the discretion of the judge overseeing the proceedings. So can the judge dismiss a case in Texas?, well a case dismissal occurs when the court ends legal proceedings prematurely, effectively terminating the case before reaching a verdict. This can happen at any stage of the legal process, from the initial filing of charges to the trial itself. A dismissal is the optimal outcome for the defendant, as it means the case is dropped and no further action is taken. However, it’s essential to note that a dismissal does not mean the charge or arrest or citation are expunged from your record. For the best chance of having charges dismissed in your criminal case or obtaining another favorable outcome, discuss your case with an Austin Criminal Defense Lawyer now. What is a Case Dismissal? A criminal case dismissal by a judge occurs when a judge determines that there is not enough evidence to proceed with the case or that the prosecution has failed to meet its burden of proof beyond a reasonable doubt. A dismissal does not necessarily mean that the defendant is innocent or that they did not commit the alleged offense. Instead, it reflects the judge’s determination that the evidence presented by the prosecution is not sufficient to proceed with the case. In some instances, a dismissal may also occur if there are procedural errors or violations of the defendant’s constitutional rights. If you are facing criminal charges, you need a skilled attorney who can aggressively advocate for your rights and work toward the best possible outcome for your case. An experienced criminal defense attorney can assess the strength of the evidence against you, identify potential weaknesses in the prosecution’s case, and present persuasive arguments to the judge in support of a dismissal. Remember, each case is different and will require its own strategic defense tailored to your specific circumstances. Part of your defense lawyer’s job is to determine whether there are grounds for dismissal and take appropriate action, so you want a skilled attorney working on your case as soon as possible. Seeking a Dismissal from the Judge in Criminal Court Judges have the authority to dismiss cases, but they only do so under certain circumstances. They consider the facts presented, evaluate legal arguments, and assess the overall fairness and integrity of the case. Ultimately, the judge’s primary responsibility is to administer justice and ensure the proper functioning of the legal system. However, a judge will not review a case for errors or deficiencies on their own. Instead, your criminal defense lawyer will need to petition the court to dismiss the case by filing a proper motion. This motion must set forth the facts and arguments supporting the dismissal. Motion practice in a criminal case involves complicated legal requirements and procedures. 
You need an experienced defense attorney on board in your case right away, so they can identify any grounds for a possible dismissal and file persuasive motions with the court. If My Case is Dismissed, Will it Still Be in my File? While a case dismissal can be a favorable outcome, it’s important to understand that it does not automatically erase the charges from your record. In Texas, dismissed cases are typically still part of your criminal record, but they may be eligible for expunction or nondisclosure in certain circumstances. Expunction completely removes the record of the case, as if it never happened, while nondisclosure limits access to the record by most employers and the public. To determine if your case is eligible for expunction or nondisclosure, it is advisable to consult with an experienced Austin criminal defense attorney who can guide you through the process. You Need a Tough, Skilled Criminal Defense Lawyer in Austin, Texas Navigating the legal system can be overwhelming, especially when you’re hoping for a case dismissal in Texas. That’s where The Law Office of David D. White: Austin Criminal Lawyer comes in. Our experienced Austin criminal defense attorneys are dedicated to helping individuals like you overcome legal challenges and achieve the best possible outcome. To get the answers you need and start building your defense, contact our offices today and schedule a free, no-pressure consultation. David D. White https://www.wm-attorneys.com/can-the-judge-dismiss-a-case/#:~:text=A%20criminal%20case%20dismissal%20by,proof%20beyond%20a%20reasonable%20doubt.
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: Can the Judge Dismiss a Case in Texas? Written by The Law Office of David White PLLC: Austin Criminal Lawyer, reviewed by David D. White May 18, 2024 can the judge dismiss a case Yes, the decision to dismiss a criminal case can lie within the discretion of the judge overseeing the proceedings. So can the judge dismiss a case in Texas?, well a case dismissal occurs when the court ends legal proceedings prematurely, effectively terminating the case before reaching a verdict. This can happen at any stage of the legal process, from the initial filing of charges to the trial itself. A dismissal is the optimal outcome for the defendant, as it means the case is dropped and no further action is taken. However, it’s essential to note that a dismissal does not mean the charge or arrest or citation are expunged from your record. For the best chance of having charges dismissed in your criminal case or obtaining another favorable outcome, discuss your case with an Austin Criminal Defense Lawyer now. What is a Case Dismissal? A criminal case dismissal by a judge occurs when a judge determines that there is not enough evidence to proceed with the case or that the prosecution has failed to meet its burden of proof beyond a reasonable doubt. A dismissal does not necessarily mean that the defendant is innocent or that they did not commit the alleged offense. Instead, it reflects the judge’s determination that the evidence presented by the prosecution is not sufficient to proceed with the case. In some instances, a dismissal may also occur if there are procedural errors or violations of the defendant’s constitutional rights. If you are facing criminal charges, you need a skilled attorney who can aggressively advocate for your rights and work toward the best possible outcome for your case. An experienced criminal defense attorney can assess the strength of the evidence against you, identify potential weaknesses in the prosecution’s case, and present persuasive arguments to the judge in support of a dismissal. Remember, each case is different and will require its own strategic defense tailored to your specific circumstances. Part of your defense lawyer’s job is to determine whether there are grounds for dismissal and take appropriate action, so you want a skilled attorney working on your case as soon as possible. Seeking a Dismissal from the Judge in Criminal Court Judges have the authority to dismiss cases, but they only do so under certain circumstances. They consider the facts presented, evaluate legal arguments, and assess the overall fairness and integrity of the case. Ultimately, the judge’s primary responsibility is to administer justice and ensure the proper functioning of the legal system. However, a judge will not review a case for errors or deficiencies on their own. Instead, your criminal defense lawyer will need to petition the court to dismiss the case by filing a proper motion. This motion must set forth the facts and arguments supporting the dismissal. Motion practice in a criminal case involves complicated legal requirements and procedures. You need an experienced defense attorney on board in your case right away, so they can identify any grounds for a possible dismissal and file persuasive motions with the court. If My Case is Dismissed, Will it Still Be in my File? 
While a case dismissal can be a favorable outcome, it’s important to understand that it does not automatically erase the charges from your record. In Texas, dismissed cases are typically still part of your criminal record, but they may be eligible for expunction or nondisclosure in certain circumstances. Expunction completely removes the record of the case, as if it never happened, while nondisclosure limits access to the record by most employers and the public. To determine if your case is eligible for expunction or nondisclosure, it is advisable to consult with an experienced Austin criminal defense attorney who can guide you through the process. You Need a Tough, Skilled Criminal Defense Lawyer in Austin, Texas Navigating the legal system can be overwhelming, especially when you’re hoping for a case dismissal in Texas. That’s where The Law Office of David D. White: Austin Criminal Lawyer comes in. Our experienced Austin criminal defense attorneys are dedicated to helping individuals like you overcome legal challenges and achieve the best possible outcome. To get the answers you need and start building your defense, contact our offices today and schedule a free, no-pressure consultation. David D. White USER: I want my case thrown out. The man that is representing me comes to court drunk and has mishandled evidence needed to acquit me. How can I get the judge to throw it out and expunge my record in Texas? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
24
40
742
null
433
Model must only respond using information contained in the context block. Model must not rely on its own knowledge or outside sources of information when responding.
What measures did the federal reserve implement in March 2020 to stabilize the commercial paper market during the COVID pandemic?
CRS INSIGHT Prepared for Members and Committees of Congress INSIGHTi COVID-19: Commercial Paper Market Strains and Federal Government Support April 13, 2020 What Is Commercial Paper and Why Is It Important? As COVID-19 spread rapidly in the United States, fears of its economic effects led to strains in the commercial paper (CP) market, one of the main funding sources for many firms and for providers of credit to individuals. Commercial paper is short-term debt issued primarily by corporations and generally is unsecured. The CP market is an important source of short-term credit for a range of financial and nonfinancial businesses, who may rely on it as an alternative to bank loans—for example, in making payroll or for other short-term funding needs. The CP market also helps provide credit to individuals through short-term asset-backed commercial paper (ABCP), which finances certain consumer loans such as auto loans or other consumer debt. Municipalities also issue CP for short-term funding needs. Some money market funds (MMFs) are key purchasers of CP, which plays a significant role in this short-term funding market. As of March 31, 2020, about 24% of total CP outstanding was ABCP; 47% of total CP was from financial issuers; and 28% was from nonfinancial issuers. The total CP market in the United States was $1.092 trillion as of the end of March 2020, though this amount can fluctuate based on market conditions. For a sense of scale, this is roughly 65% of the amount of currency in circulation by the public ($1.73 trillion as of March 9, 2020). The CP market grew rapidly in the 1970s and 1980s in the United States, as a lower-cost alternative to bank loans. A provision in the securities laws allowing for an exemption from more elaborate Securities and Exchange Commission (SEC) registration requirements for debt securities with maturities of 270 days or less helped fuel this market’s rapid expansion. From 1970 to 1991, outstanding commercial paper grew at an annual rate of 14%. The subsequent growth of securitization, in which loans are packaged into bonds and sold to investors as securities, also fueled a rapid expansion of ABCP. Between 1997 and 2007, ABCP grew from $250 billion to more than $1 trillion. This growth was partly fueled by the expansion of residential mortgage securitization. In August 2007, ABCP comprised over 52% of the total CP; financial CP accounted for 38%; and nonfinancial CP constituted 10%. The amount of CP outstanding peaked at $2.2 trillion in August 2007, before shrinking considerably during and after the 2008 financial crisis. Congressional Research Service https://crsreports.congress.gov IN11332 Congressional Research Service 2 Because CP involves short maturities (much CP matures in 30 days or less), many firms have to “roll over” maturing CP—issuing new CP as existing CP matures. Thus, the CP market is generally susceptible to roll-over risk, meaning the risk that market conditions may change and the usual buyers of CP might decline to purchase new notes when existing ones expire, preferring perhaps to hold cash. This is often sparked by credit risk, wherein fears over a CP issuer’s credit, or even the bankruptcy of a CP issuer, lead to depressed demand for commercial paper. The risk of being unable to roll over maturing commercial paper due to credit risk has been demonstrated as real in recent financial history, both in the financial crisis following Lehman Brothers’ collapse and in prior sudden corporate bankruptcies. 
When credit and liquidity become unavailable through the CP market, the effects can spill over into credit markets more generally. Commercial Paper Market Stress and Federal Government Support As concerns over the spread of COVID-19 grew, stresses in the CP market became linked to the supply of business credit, putting pressure on banks and heightening the market demand for cash. Such strains on credit markets can sharply increase borrowing costs for financial and nonfinancial firms. When investment bank Lehman Brothers failed during the 2008 crisis, the cost of borrowing in CP, as measured by the spread for CP borrowing rates over more stable overnight index swap rates, rose by about 200 basis points (2%) in the following week, and the rates for financial firms’ CP notes eventually climbed higher. Data from the Federal Reserve shown in Figure 1 indicate that CP borrowing rates for financial issuers, as measured in spreads for CP borrowing rates over Treasuries, spiked by about 200 basis points in March 2020, as investors grew reluctant to buy new CP. To add liquidity and foster credit provision in the CP market, the Federal Reserve intervened on March 17, 2020, with a credit facility. Figure 1. Spreads Between 1-Month and 3-Month AA-rated Financial Commercial Paper and 3-Month Constant Maturity Treasury Rates Source: CRS, based on data obtained from the Federal Reserve Bank of St. Louis FRED website. Congressional Research Service 3 IN11332 · VERSION 1 · NEW Note: “AA-rated” is the second-highest credit rating. For more information, see the Federal Reserve Bank of New York website. On March 17, the Federal Reserve (Fed) announced that it was establishing a Commercial Paper Funding Facility (CPFF) to support the flow of credit to households and businesses. This facility is backed by funding from the Treasury’s Economic Stabilization Fund. The Fed noted the CPFF was designed to support the CP markets, which “directly finance a wide range of economic activity, supplying credit and funding for auto loans and mortgages as well as liquidity to meet the operational needs of a range of companies.” The Fed aims to provide a liquidity backstop to CP issuers by buying both ABCP and regular, unsecured CP of a minimum credit quality from eligible companies. By acting as a buyer of the last resort, the Fed program aims to reduce investors’ risk that CP issuers would not repay them because they became unable to roll over any maturing CP. On March 23, the Fed expanded the CPFF to facilitate the flow of credit to municipalities by including high-quality, tax-exempt commercial paper as eligible securities, and also reduced the pricing of the facility. (For more information, see CRS Insight IN11259, Federal Reserve: Recent Actions in Response to COVID-19, by Marc Labonte; and CRS Report R44185, Federal Reserve: Emergency Lending, by Marc Labonte.) Author Information Rena S. Miller Specialist in Financial Economics Disclaimer This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress. Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not subject to copyright protection in the United States. 
Any CRS Report may be reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you wish to copy or otherwise use copyrighted material.
CRS INSIGHT Prepared for Members and Committees of Congress INSIGHTi COVID-19: Commercial Paper Market Strains and Federal Government Support April 13, 2020 What Is Commercial Paper and Why Is It Important? As COVID-19 spread rapidly in the United States, fears of its economic effects led to strains in the commercial paper (CP) market, one of the main funding sources for many firms and for providers of credit to individuals. Commercial paper is short-term debt issued primarily by corporations and generally is unsecured. The CP market is an important source of short-term credit for a range of financial and nonfinancial businesses, who may rely on it as an alternative to bank loans—for example, in making payroll or for other short-term funding needs. The CP market also helps provide credit to individuals through short-term asset-backed commercial paper (ABCP), which finances certain consumer loans such as auto loans or other consumer debt. Municipalities also issue CP for short-term funding needs. Some money market funds (MMFs) are key purchasers of CP, which plays a significant role in this short-term funding market. As of March 31, 2020, about 24% of total CP outstanding was ABCP; 47% of total CP was from financial issuers; and 28% was from nonfinancial issuers. The total CP market in the United States was $1.092 trillion as of the end of March 2020, though this amount can fluctuate based on market conditions. For a sense of scale, this is roughly 65% of the amount of currency in circulation by the public ($1.73 trillion as of March 9, 2020). The CP market grew rapidly in the 1970s and 1980s in the United States, as a lower-cost alternative to bank loans. A provision in the securities laws allowing for an exemption from more elaborate Securities and Exchange Commission (SEC) registration requirements for debt securities with maturities of 270 days or less helped fuel this market’s rapid expansion. From 1970 to 1991, outstanding commercial paper grew at an annual rate of 14%. The subsequent growth of securitization, in which loans are packaged into bonds and sold to investors as securities, also fueled a rapid expansion of ABCP. Between 1997 and 2007, ABCP grew from $250 billion to more than $1 trillion. This growth was partly fueled by the expansion of residential mortgage securitization. In August 2007, ABCP comprised over 52% of the total CP; financial CP accounted for 38%; and nonfinancial CP constituted 10%. The amount of CP outstanding peaked at $2.2 trillion in August 2007, before shrinking considerably during and after the 2008 financial crisis. Congressional Research Service https://crsreports.congress.gov IN11332 Congressional Research Service 2 Because CP involves short maturities (much CP matures in 30 days or less), many firms have to “roll over” maturing CP—issuing new CP as existing CP matures. Thus, the CP market is generally susceptible to roll-over risk, meaning the risk that market conditions may change and the usual buyers of CP might decline to purchase new notes when existing ones expire, preferring perhaps to hold cash. This is often sparked by credit risk, wherein fears over a CP issuer’s credit, or even the bankruptcy of a CP issuer, lead to depressed demand for commercial paper. The risk of being unable to roll over maturing commercial paper due to credit risk has been demonstrated as real in recent financial history, both in the financial crisis following Lehman Brothers’ collapse and in prior sudden corporate bankruptcies. 
When credit and liquidity become unavailable through the CP market, the effects can spill over into credit markets more generally. Commercial Paper Market Stress and Federal Government Support As concerns over the spread of COVID-19 grew, stresses in the CP market became linked to the supply of business credit, putting pressure on banks and heightening the market demand for cash. Such strains on credit markets can sharply increase borrowing costs for financial and nonfinancial firms. When investment bank Lehman Brothers failed during the 2008 crisis, the cost of borrowing in CP, as measured by the spread for CP borrowing rates over more stable overnight index swap rates, rose by about 200 basis points (2%) in the following week, and the rates for financial firms’ CP notes eventually climbed higher. Data from the Federal Reserve shown in Figure 1 indicate that CP borrowing rates for financial issuers, as measured in spreads for CP borrowing rates over Treasuries, spiked by about 200 basis points in March 2020, as investors grew reluctant to buy new CP. To add liquidity and foster credit provision in the CP market, the Federal Reserve intervened on March 17, 2020, with a credit facility. Figure 1. Spreads Between 1-Month and 3-Month AA-rated Financial Commercial Paper and 3-Month Constant Maturity Treasury Rates Source: CRS, based on data obtained from the Federal Reserve Bank of St. Louis FRED website. Congressional Research Service 3 IN11332 · VERSION 1 · NEW Note: “AA-rated” is the second-highest credit rating. For more information, see the Federal Reserve Bank of New York website. On March 17, the Federal Reserve (Fed) announced that it was establishing a Commercial Paper Funding Facility (CPFF) to support the flow of credit to households and businesses. This facility is backed by funding from the Treasury’s Economic Stabilization Fund. The Fed noted the CPFF was designed to support the CP markets, which “directly finance a wide range of economic activity, supplying credit and funding for auto loans and mortgages as well as liquidity to meet the operational needs of a range of companies.” The Fed aims to provide a liquidity backstop to CP issuers by buying both ABCP and regular, unsecured CP of a minimum credit quality from eligible companies. By acting as a buyer of the last resort, the Fed program aims to reduce investors’ risk that CP issuers would not repay them because they became unable to roll over any maturing CP. On March 23, the Fed expanded the CPFF to facilitate the flow of credit to municipalities by including high-quality, tax-exempt commercial paper as eligible securities, and also reduced the pricing of the facility. (For more information, see CRS Insight IN11259, Federal Reserve: Recent Actions in Response to COVID-19, by Marc Labonte; and CRS Report R44185, Federal Reserve: Emergency Lending, by Marc Labonte.) Author Information Rena S. Miller Specialist in Financial Economics Disclaimer This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress. Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not subject to copyright protection in the United States. 
Any CRS Report may be reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you wish to copy or otherwise use copyrighted material. Model must only respond using information contained in the context block. Model must not rely on its own knowledge or outside sources of information when responding. What measures did the federal reserve implement in March 2020 to stabilize the commercial paper market during the COVID pandemic?
Model must only respond using information contained in the context block. Model must not rely on its own knowledge or outside sources of information when responding. EVIDENCE: CRS INSIGHT Prepared for Members and Committees of Congress INSIGHTi COVID-19: Commercial Paper Market Strains and Federal Government Support April 13, 2020 What Is Commercial Paper and Why Is It Important? As COVID-19 spread rapidly in the United States, fears of its economic effects led to strains in the commercial paper (CP) market, one of the main funding sources for many firms and for providers of credit to individuals. Commercial paper is short-term debt issued primarily by corporations and generally is unsecured. The CP market is an important source of short-term credit for a range of financial and nonfinancial businesses, who may rely on it as an alternative to bank loans—for example, in making payroll or for other short-term funding needs. The CP market also helps provide credit to individuals through short-term asset-backed commercial paper (ABCP), which finances certain consumer loans such as auto loans or other consumer debt. Municipalities also issue CP for short-term funding needs. Some money market funds (MMFs) are key purchasers of CP, which plays a significant role in this short-term funding market. As of March 31, 2020, about 24% of total CP outstanding was ABCP; 47% of total CP was from financial issuers; and 28% was from nonfinancial issuers. The total CP market in the United States was $1.092 trillion as of the end of March 2020, though this amount can fluctuate based on market conditions. For a sense of scale, this is roughly 65% of the amount of currency in circulation by the public ($1.73 trillion as of March 9, 2020). The CP market grew rapidly in the 1970s and 1980s in the United States, as a lower-cost alternative to bank loans. A provision in the securities laws allowing for an exemption from more elaborate Securities and Exchange Commission (SEC) registration requirements for debt securities with maturities of 270 days or less helped fuel this market’s rapid expansion. From 1970 to 1991, outstanding commercial paper grew at an annual rate of 14%. The subsequent growth of securitization, in which loans are packaged into bonds and sold to investors as securities, also fueled a rapid expansion of ABCP. Between 1997 and 2007, ABCP grew from $250 billion to more than $1 trillion. This growth was partly fueled by the expansion of residential mortgage securitization. In August 2007, ABCP comprised over 52% of the total CP; financial CP accounted for 38%; and nonfinancial CP constituted 10%. The amount of CP outstanding peaked at $2.2 trillion in August 2007, before shrinking considerably during and after the 2008 financial crisis. Congressional Research Service https://crsreports.congress.gov IN11332 Congressional Research Service 2 Because CP involves short maturities (much CP matures in 30 days or less), many firms have to “roll over” maturing CP—issuing new CP as existing CP matures. Thus, the CP market is generally susceptible to roll-over risk, meaning the risk that market conditions may change and the usual buyers of CP might decline to purchase new notes when existing ones expire, preferring perhaps to hold cash. This is often sparked by credit risk, wherein fears over a CP issuer’s credit, or even the bankruptcy of a CP issuer, lead to depressed demand for commercial paper. 
The risk of being unable to roll over maturing commercial paper due to credit risk has been demonstrated as real in recent financial history, both in the financial crisis following Lehman Brothers’ collapse and in prior sudden corporate bankruptcies. When credit and liquidity become unavailable through the CP market, the effects can spill over into credit markets more generally. Commercial Paper Market Stress and Federal Government Support As concerns over the spread of COVID-19 grew, stresses in the CP market became linked to the supply of business credit, putting pressure on banks and heightening the market demand for cash. Such strains on credit markets can sharply increase borrowing costs for financial and nonfinancial firms. When investment bank Lehman Brothers failed during the 2008 crisis, the cost of borrowing in CP, as measured by the spread for CP borrowing rates over more stable overnight index swap rates, rose by about 200 basis points (2%) in the following week, and the rates for financial firms’ CP notes eventually climbed higher. Data from the Federal Reserve shown in Figure 1 indicate that CP borrowing rates for financial issuers, as measured in spreads for CP borrowing rates over Treasuries, spiked by about 200 basis points in March 2020, as investors grew reluctant to buy new CP. To add liquidity and foster credit provision in the CP market, the Federal Reserve intervened on March 17, 2020, with a credit facility. Figure 1. Spreads Between 1-Month and 3-Month AA-rated Financial Commercial Paper and 3-Month Constant Maturity Treasury Rates Source: CRS, based on data obtained from the Federal Reserve Bank of St. Louis FRED website. Congressional Research Service 3 IN11332 · VERSION 1 · NEW Note: “AA-rated” is the second-highest credit rating. For more information, see the Federal Reserve Bank of New York website. On March 17, the Federal Reserve (Fed) announced that it was establishing a Commercial Paper Funding Facility (CPFF) to support the flow of credit to households and businesses. This facility is backed by funding from the Treasury’s Economic Stabilization Fund. The Fed noted the CPFF was designed to support the CP markets, which “directly finance a wide range of economic activity, supplying credit and funding for auto loans and mortgages as well as liquidity to meet the operational needs of a range of companies.” The Fed aims to provide a liquidity backstop to CP issuers by buying both ABCP and regular, unsecured CP of a minimum credit quality from eligible companies. By acting as a buyer of the last resort, the Fed program aims to reduce investors’ risk that CP issuers would not repay them because they became unable to roll over any maturing CP. On March 23, the Fed expanded the CPFF to facilitate the flow of credit to municipalities by including high-quality, tax-exempt commercial paper as eligible securities, and also reduced the pricing of the facility. (For more information, see CRS Insight IN11259, Federal Reserve: Recent Actions in Response to COVID-19, by Marc Labonte; and CRS Report R44185, Federal Reserve: Emergency Lending, by Marc Labonte.) Author Information Rena S. Miller Specialist in Financial Economics Disclaimer This document was prepared by the Congressional Research Service (CRS). CRS serves as nonpartisan shared staff to congressional committees and Members of Congress. It operates solely at the behest of and under the direction of Congress. 
Information in a CRS Report should not be relied upon for purposes other than public understanding of information that has been provided by CRS to Members of Congress in connection with CRS’s institutional role. CRS Reports, as a work of the United States Government, are not subject to copyright protection in the United States. Any CRS Report may be reproduced and distributed in its entirety without permission from CRS. However, as a CRS Report may include copyrighted images or material from a third party, you may need to obtain the permission of the copyright holder if you wish to copy or otherwise use copyrighted material. USER: What measures did the federal reserve implement in March 2020 to stabilize the commercial paper market during the COVID pandemic? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
true
26
20
1,181
null
565
You may only use information in the context block when responding to the prompt
What specific disease in AR currently used for on the patient end?
AR applications for physicians and healthcare professionals cover by far the largest user group [6]. However, especially for AR-supported image guidance and navigation, very high accuracy and reliability may be needed [9]. Applications, for which sub-millimeter precision is not necessary, are, for example, ablations, ventriculostomy [10-14] or certain orthopedic interventions [15]. Here, the HoloLens is with its inside-out tracking already a promising tool, but for applications that need, for example, sub-millimeter precision, it cannot be used reliably yet. An example is the deep brain stimulation (DBS) procedure used for treating essential tremor and Parkinson’s disease, where millimeter-to-submillimeter accuracy in DBS targeting (an electrode placement inside the brain) can be important [16]. Another study exploring the clinical accuracy of the HoloLens for neuronavigation concludes also that it is currently not within clinically acceptable levels [17]. The same holds true for some application scenarios in orthopedic surgery [18], like screw placement, where there is still room for improvement [19]. We expect the Vision Pro to move the needle in terms of accuracy, because of its strong inside-out tracking through 12 built-in cameras and LiDAR (Light Detection and Ranging) sensing method, which is the key aspect for increasing the accuracy of AR. We do not see the often-criticized end-user price of $3,499 (without tax) for the Vision Pro as an issue for this user group. The price is similar to the HoloLens and much cheaper at a fraction of the costs compared to existing, and clinically used conventional medical navigation systems, e.g., from Brainlab or Medtronic. In this context, it is important to mention that the use of AR head-mounted displays (HMDs) with direct patient reference (and outside of research in the context of feasibility studies) requires an appropriate regulatory approval and a certification/classification as medical device. The Magic Leap 2, for example, received certification for usage in an operating room in January of this year (2023). Moreover, and especially in a medical context, the display may also require an image focus in surgical table distance [20]. Students are the second most common intended user group [6] with applications, for example, in educational training, like the HoloPointer, a virtual AR pointer for laparoscopic surgery training [21]. Another example is the usage of MR to teach medical students catheter placement [22] or a phantom experiment to study the effectiveness of learning using AR in the access of the central venous [23]. We found that the effects of HoloLens-based learning compared to conventional learning, e.g., by using cadavers or other computerized methods, seem to be rather small. A reason for this is that the learning tools usually used are relatively simple, conventional 3D models and more innovative visualizations, including interactive, dynamic content, which cannot be easily delivered by regular computerized methods, have not been explored in depth yet [6]. Here, we believe that the Vision Pro can raise the bar, because of its high-resolution visualization (consisting of two micro-OLED displays with a total of 23 megapixels) that uses eye-tracking to make the headset also usable as a desktop screen with its fine textual details. In addition, the 3D user interface, using precise and intuitive finger tracking, can be a game changer. 
Despite these advances, it remains to be seen if an AR headset has a major advantage over a conventional screen. The cost of the Vision Pro is also a concern, especially for students with a limited financial budget, but the “Pro” might indicate that there will also be a “regular” version released. Patients are currently the least frequented target user group with AR devices[6]. An example is MemHolo, which provides MR experiences for subjects with Alzheimer’s disease [24]. Other examples in this area include the usage of the HoloLens as an assistant and monitoring tool for medication adherence [25] and the usage of a HoloLens-based system for functional mobility assessment [26]. However, many interesting assistance and monitoring applications are limited by the restricted possible usage time of untethered AR headsets primarily due to battery life, and this problem seems to remain with the Vision Pro, which has a similar battery life of approximately two hours, like the HoloLens. The only consolation is the “external” battery pack that can be swapped for the Vision Pro. Finally, the Apple M2 and the new R1 chip of the Vision Pro could potentially boost the relatively small number of applications that have been explored so far, e.g., by reducing motion sickness, especially for elderly patients, which are more susceptible to MR sickness than younger users.
system instruction: [You may only use information in the context block when responding to the prompt] Content block: [AR applications for physicians and healthcare professionals cover by far the largest user group [6]. However, especially for AR-supported image guidance and navigation, very high accuracy and reliability may be needed [9]. Applications, for which sub-millimeter precision is not necessary, are, for example, ablations, ventriculostomy [10-14] or certain orthopedic interventions [15]. Here, the HoloLens is with its inside-out tracking already a promising tool, but for applications that need, for example, sub-millimeter precision, it cannot be used reliably yet. An example is the deep brain stimulation (DBS) procedure used for treating essential tremor and Parkinson’s disease, where millimeter-to-submillimeter accuracy in DBS targeting (an electrode placement inside the brain) can be important [16]. Another study exploring the clinical accuracy of the HoloLens for neuronavigation concludes also that it is currently not within clinically acceptable levels [17]. The same holds true for some application scenarios in orthopedic surgery [18], like screw placement, where there is still room for improvement [19]. We expect the Vision Pro to move the needle in terms of accuracy, because of its strong inside-out tracking through 12 built-in cameras and LiDAR (Light Detection and Ranging) sensing method, which is the key aspect for increasing the accuracy of AR. We do not see the often-criticized end-user price of $3,499 (without tax) for the Vision Pro as an issue for this user group. The price is similar to the HoloLens and much cheaper at a fraction of the costs compared to existing, and clinically used conventional medical navigation systems, e.g., from Brainlab or Medtronic. In this context, it is important to mention that the use of AR head-mounted displays (HMDs) with direct patient reference (and outside of research in the context of feasibility studies) requires an appropriate regulatory approval and a certification/classification as medical device. The Magic Leap 2, for example, received certification for usage in an operating room in January of this year (2023). Moreover, and especially in a medical context, the display may also require an image focus in surgical table distance [20]. Students are the second most common intended user group [6] with applications, for example, in educational training, like the HoloPointer, a virtual AR pointer for laparoscopic surgery training [21]. Another example is the usage of MR to teach medical students catheter placement [22] or a phantom experiment to study the effectiveness of learning using AR in the access of the central venous [23]. We found that the effects of HoloLens-based learning compared to conventional learning, e.g., by using cadavers or other computerized methods, seem to be rather small. A reason for this is that the learning tools usually used are relatively simple, conventional 3D models and more innovative visualizations, including interactive, dynamic content, which cannot be easily delivered by regular computerized methods, have not been explored in depth yet [6]. Here, we believe that the Vision Pro can raise the bar, because of its high-resolution visualization (consisting of two micro-OLED displays with a total of 23 megapixels) that uses eye-tracking to make the headset also usable as a desktop screen with its fine textual details. 
In addition, the 3D user interface, using precise and intuitive finger tracking, can be a game changer. Despite these advances, it remains to be seen if an AR headset has a major advantage over a conventional screen. The cost of the Vision Pro is also a concern, especially for students with a limited financial budget, but the “Pro” might indicate that there will also be a “regular” version released. Patients are currently the least frequented target user group with AR devices[6]. An example is MemHolo, which provides MR experiences for subjects with Alzheimer’s disease [24]. Other examples in this area include the usage of the HoloLens as an assistant and monitoring tool for medication adherence [25] and the usage of a HoloLens-based system for functional mobility assessment [26]. However, many interesting assistance and monitoring applications are limited by the restricted possible usage time of untethered AR headsets primarily due to battery life, and this problem seems to remain with the Vision Pro, which has a similar battery life of approximately two hours, like the HoloLens. The only consolation is the “external” battery pack that can be swapped for the Vision Pro. Finally, the Apple M2 and the new R1 chip of the Vision Pro could potentially boost the relatively small number of applications that have been explored so far, e.g., by reducing motion sickness, especially for elderly patients, which are more susceptible to MR sickness than younger users.] question: [What specific disease in AR currently used for on the patient end?]
You may only use information in the context block when responding to the prompt EVIDENCE: AR applications for physicians and healthcare professionals cover by far the largest user group [6]. However, especially for AR-supported image guidance and navigation, very high accuracy and reliability may be needed [9]. Applications, for which sub-millimeter precision is not necessary, are, for example, ablations, ventriculostomy [10-14] or certain orthopedic interventions [15]. Here, the HoloLens is with its inside-out tracking already a promising tool, but for applications that need, for example, sub-millimeter precision, it cannot be used reliably yet. An example is the deep brain stimulation (DBS) procedure used for treating essential tremor and Parkinson’s disease, where millimeter-to-submillimeter accuracy in DBS targeting (an electrode placement inside the brain) can be important [16]. Another study exploring the clinical accuracy of the HoloLens for neuronavigation concludes also that it is currently not within clinically acceptable levels [17]. The same holds true for some application scenarios in orthopedic surgery [18], like screw placement, where there is still room for improvement [19]. We expect the Vision Pro to move the needle in terms of accuracy, because of its strong inside-out tracking through 12 built-in cameras and LiDAR (Light Detection and Ranging) sensing method, which is the key aspect for increasing the accuracy of AR. We do not see the often-criticized end-user price of $3,499 (without tax) for the Vision Pro as an issue for this user group. The price is similar to the HoloLens and much cheaper at a fraction of the costs compared to existing, and clinically used conventional medical navigation systems, e.g., from Brainlab or Medtronic. In this context, it is important to mention that the use of AR head-mounted displays (HMDs) with direct patient reference (and outside of research in the context of feasibility studies) requires an appropriate regulatory approval and a certification/classification as medical device. The Magic Leap 2, for example, received certification for usage in an operating room in January of this year (2023). Moreover, and especially in a medical context, the display may also require an image focus in surgical table distance [20]. Students are the second most common intended user group [6] with applications, for example, in educational training, like the HoloPointer, a virtual AR pointer for laparoscopic surgery training [21]. Another example is the usage of MR to teach medical students catheter placement [22] or a phantom experiment to study the effectiveness of learning using AR in the access of the central venous [23]. We found that the effects of HoloLens-based learning compared to conventional learning, e.g., by using cadavers or other computerized methods, seem to be rather small. A reason for this is that the learning tools usually used are relatively simple, conventional 3D models and more innovative visualizations, including interactive, dynamic content, which cannot be easily delivered by regular computerized methods, have not been explored in depth yet [6]. Here, we believe that the Vision Pro can raise the bar, because of its high-resolution visualization (consisting of two micro-OLED displays with a total of 23 megapixels) that uses eye-tracking to make the headset also usable as a desktop screen with its fine textual details. In addition, the 3D user interface, using precise and intuitive finger tracking, can be a game changer. 
Despite these advances, it remains to be seen if an AR headset has a major advantage over a conventional screen. The cost of the Vision Pro is also a concern, especially for students with a limited financial budget, but the “Pro” might indicate that there will also be a “regular” version released. Patients are currently the least frequented target user group with AR devices[6]. An example is MemHolo, which provides MR experiences for subjects with Alzheimer’s disease [24]. Other examples in this area include the usage of the HoloLens as an assistant and monitoring tool for medication adherence [25] and the usage of a HoloLens-based system for functional mobility assessment [26]. However, many interesting assistance and monitoring applications are limited by the restricted possible usage time of untethered AR headsets primarily due to battery life, and this problem seems to remain with the Vision Pro, which has a similar battery life of approximately two hours, like the HoloLens. The only consolation is the “external” battery pack that can be swapped for the Vision Pro. Finally, the Apple M2 and the new R1 chip of the Vision Pro could potentially boost the relatively small number of applications that have been explored so far, e.g., by reducing motion sickness, especially for elderly patients, which are more susceptible to MR sickness than younger users. USER: What specific disease in AR currently used for on the patient end? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
14
12
747
null
142
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
How can china achieve technological parity with the US military? What lessons from history can it learn in trying to achieve this aim? Under 300 words please.
Can adversaries of the United States easily imitate its most advanced weapon systems and thus erode its military-technological superiority? Do reverse engineering, industrial espionage, and, in particular, cyber espionage facilitate and accelerate this process? China’s decades-long economic boom, military modernization program, massive reliance on cyber espionage, and assertive foreign policy have made these questions increasingly salient. Yet, almost everything known about this topic draws from the past. As we explain in this article, the conclusions that the existing literature has reached by studying prior eras have no applicability to the current day. Scholarship in international relations theory generally assumes that rising states benefit from the “advantage of backwardness,” as described by Alexander Gerschenkron.1 By free riding on the research and technology of the most advanced countries, less developed states can allegedly close the military-technological gap with their rivals relatively easily and quickly.2 More recent works maintain that globalization, the emergence of dual-use components, and advances in communications (including the opportunity for cyber espionage) have facilitated this process.3 This literature is built on shaky theoretical foundations, and its claims lack empirical support. The international relations literature largely ignores one of the most important changes to have occurred in the realm of weapons development since the second industrial revolution (1870–1914): the exponential increase in the complexity of military technology. We argue that this increase in complexity has promoted a change in the system of production that has made the imitation and replication of the performance of state-of-the-art weapon systems harder—so much so as to offset the diffusing effects of globalization and advances in communications. On the one hand, the increase in complexity has significantly raised the entry barriers for the production of advanced weapon systems: countries must now possess an extremely advanced industrial, scientific, and technological base in weapons production before they can copy foreign military technology. On the other hand, the knowledge to design, develop, and produce advanced weapon systems is less likely to diffuse, given its increasingly tacit and organizational nature. As a result, the advantage of backwardness has shrunk significantly, and know-how and experience in the production of advanced weapon systems have become an important source of power for those who master them. We employ two case studies to test this argument: Imperial Germany’s rapid success in closing the technological gap with the British Dreadnought battleship, despite significant inhibiting factors; and China’s struggle to imitate the U.S. F-22/A Raptor jet fighter, despite several facilitating conditions. Our research contributes to key theoretical and policy debates. First, the ability to imitate state-of-the-art military hardware plays a central role in theories that seek to explain patterns of internal balancing and the rise and fall of great powers. 
Yet, the mainstream international relations literature has not investigated this process.4 Because imitating military technology was relatively easy in the past, scholars and policymakers assume that it also is today, as frequent analogies between Wilhelmine Germany and contemporary China epitomize.5 In this article, we investigate the conditions under which the imitation of state-of-the-art weapon systems such as attack submarines and combat aircraft is more or less likely to succeed. Second, we develop the first systematic theoretical explanation of why U.S. superiority in military technology remains largely unrivaled almost thirty years after the end of the Cold War, despite globalization and the information and communication technology revolution. Some scholars have argued that developing modern weapon systems has become dramatically more demanding, which in turn has made internal balancing against the United States more difficult.6 This literature, however, cannot explain why in the age of globalization and instant communications—with cyber espionage permitting the theft of massive amount of digital data—U.S. know-how in advanced weapon systems has not already diffused to other states. Other contributors to the debate on unipolarity have either pointed to the relative inferiority of Chinese military technology without providing a theoretical explanation, or they have argued that developing the military capabilities to challenge the status quo is, in the long run, a function of political will—an argument that cannot account for the failure of the Soviet Union to cope with U.S. military technology from the late 1970s onward.7 We argue that in the transition from the second industrial revolution to the information age, the imitation of state-of-the-art military technology has become more difficult, so much so that today rising powers or even peer competitors cannot easily copy foreign weapon systems.8 Our findings address existing concerns that China’s use of cyber espionage and the increasing globalization of arms production will allow Beijing to rapidly close the military-technological gap with the United States.9 Third, the international relations literature accepts the claim that globalization and advances in communications have made the imitation of military technology easier; yet no one has empirically tested this proposition.10 This failing is particularly concerning in light of the opportunities opened by cyber espionage—a practice that, according to many observers, could erode the U.S. advantage in military technology. Richard Clark, a former U.S. senior government official, believes that Chinese cyber espionage could result in the United States “hav[ing] all of [its] research and development stolen”; Gen. Keith Alexander, a former director of the National Security Agency, worries that cyber espionage could lead to “the greatest transfer of wealth in history.”11 With a few notable exceptions, however, international relations scholars have paid little attention to the advantages and limits of cyber espionage for copying foreign military technology.12 Our research fills this gap and tests the conventional wisdom using the case of China, one of the states that has benefited the most from globalization and that has employed cyber espionage more extensively than any other country.
Source: https://direct.mit.edu/isec/article/43/3/141/12218/Why-China-Has-Not-Caught-Up-Yet-Military
has_url_in_context: false
len_system: 24
len_user: 27
len_context: 962
target: null
row_id: 331
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge.
What is the SDWA?
Detections of contaminants in public water supplies in numerous states have raised questions about drinking water quality and have increased congressional interest in the Environmental Protection Agency’s (EPA’s) efforts to regulate contaminants under the Safe Drinking Water Act (SDWA). Congress is particularly interested in the EPA’s process for evaluating new contaminants for potential regulation. Detections of already-regulated contaminants, such as lead, have also raised concerns about the effectiveness of certain existing regulations. SDWA is the key federal law that authorizes EPA to promulgate regulations to control contaminants in public water supplies. Since its enactment in 1974, EPA has issued drinking water regulations for 100 contaminants. Congress has twice revised the act’s process for evaluating contaminants and developing drinking water regulations (in 1986 and 1996). In 1986, Congress directed EPA to develop regulations for 83 contaminants within 3 years, and adopt regulations, every 3 years, for at least 25 new contaminants. In 1996, when this regulatory schedule proved unworkable, Congress amended SDWA to establish a risk-based process that prioritizes contaminants for regulation based on health effects and occurrence. Under SDWA, EPA follows a multistep process to evaluate and prioritize contaminants for regulation. This process includes identifying contaminants of potential concern, assessing health risks, collecting national occurrence data (and developing reliable and field-tested analytical methods necessary to do so), and making determinations as to whether a contaminant warrants regulation. Since 1996, EPA has considered over 7,500 contaminants for potential regulation, revised existing regulations, and established new regulations and standards for several contaminants. When EPA determines that a contaminant warrants regulation, SDWA directs EPA to propose a “national primary drinking water regulation” and request public comment within 24 months. Within 18 months of the proposal, EPA is required to promulgate a final rule. EPA is required to establish a nonenforceable maximum contaminant level goal (MCLG) at a level at which no known or anticipated adverse health effects occur and allowing for an adequate margin of safety. Drinking water regulations generally specify a maximum contaminant level (MCL)—an enforceable limit for a contaminant in public water supplies. SDWA requires EPA to set the MCL as close to the MCLG as “feasible,” taking treatment efficacy and costs into consideration. Concurrent with proposing a regulation, SDWA requires EPA to publish a “health risk reduction and cost analysis” for each contaminant covered by the proposed regulation and make a determination whether or not the benefits of regulation outweigh the compliance costs. EPA’s regulations generally take effect three years after promulgation, though the agency may allow up to two additional years, under certain conditions. For each drinking water regulation, SDWA requires EPA to identify a list of best available technologies, treatment techniques, and other means that EPA finds feasible for meeting the MCL. In addition, EPA is required to identify treatment technologies that achieve the MCL and are affordable for small systems. Each regulation also establishes associated monitoring and reporting requirements. SDWA requires EPA to review—and, if necessary, revise—each existing national primary drinking water regulation every six years.
SDWA also requires that any revisions to drinking water regulations maintain or provide greater health protection. Under the current SDWA statutory framework, evaluating and developing regulations for contaminants requires data, including data from peer-reviewed scientific studies on potential health effects and nationally representative occurrence data. For some contaminants, the availability or development of (1) data, (2) analytical methods to detect contaminants in drinking water, and (3) treatment technologies pose technical and resource challenges. Congressional attention has centered on EPA’s implementation of SDWA regulatory development provisions. Some stakeholders also have raised concerns about regulatory costs for communities. In recent Congresses, some Members have raised concerns that the act’s process is lengthy and complicated and does not allow for the timely regulation of contaminants of concern in drinking water. Others have expressed concern that proposals to expedite regulation by removing elements of SDWA regulatory development provisions (e.g., the requirement to use peer-reviewed science or a health-risk-based approach) may result in increased costs to communities without commensurate public health protection. This debate was heightened prior to EPA’s determination to regulate per- and polyfluoroalkyl substances (PFAS) in 2021, yet stakeholders raised similar concerns after EPA’s 2024 finalization of a regulation for six PFAS.
has_url_in_context: false
len_system: 28
len_user: 4
len_context: 699
target: null
row_id: 150
Answer the questions using only the provided text; do not use any outside sources for information. Any mention of a Supreme Court Justice by name should be in bold.
What specific concerns did the dissenting Supreme Court Justices have about this ruling?
JUSTICE BREYER, JUSTICE SOTOMAYOR, and JUSTICE KAGAN, dissenting. For half a century, Roe v. Wade, and Planned Parenthood of Southeastern Pa. v. Casey, have protected the liberty and equality of women. Roe held, and Casey reaffirmed, that the Constitution safeguards a woman’s right to decide for herself whether to bear a child. Roe held, and Casey reaffirmed, that in the first stages of pregnancy, the government could not make that choice for women. The government could not control a woman’s body or the course of a woman’s life: It could not determine what the woman’s future would be. Respecting a woman as an autonomous being, and granting her full equality, meant giving her substantial choice over this most personal and most consequential of all life decisions. Roe and Casey well understood the difficulty and divisiveness of the abortion issue. The Court knew that Americans hold profoundly different views about the “moral[ity]” of “terminating a pregnancy, even in its earliest stage.” And the Court recognized that “the state has legitimate interests from the outset of the pregnancy in protecting” the “life of the fetus that may become a child.” So the Court struck a balance, as it often does when values and goals compete. It held that the State could prohibit abortions after fetal viability, so long as the ban contained exceptions to safeguard a woman’s life or health. It held that even before viability, the State could regulate the abortion procedure in multiple and meaningful ways. But until the viability line was crossed, the Court held, a State could not impose a “substantial obstacle” on a woman’s “right to elect the procedure” as she (not the government) thought proper, in light of all the circumstances and complexities of her own life. Ibid. Today, the Court discards that balance. It says that from the very moment of fertilization, a woman has no rights to speak of. A State can force her to bring a pregnancy to term, even at the steepest personal and familial costs. An abortion restriction, the majority holds, is permissible whenever rational, the lowest level of scrutiny known to the law. And because, as the Court has often stated, protecting fetal life is rational, States will feel free to enact all manner of restrictions. The Mississippi law at issue here bars abortions after the 15th week of pregnancy. Under the majority’s ruling, though, another State’s law could do so after ten weeks, or five or three or one—or, again, from the moment of fertilization. States have already passed such laws, in anticipation of today’s ruling. More will follow. Some States have enacted laws extending to all forms of abortion procedure, including taking medication in one’s own home. They have passed laws without any exceptions for when the woman is the victim of rape or incest. Under those laws, a woman will have to bear her rapist’s child or a young girl her father’s—no matter if doing so will destroy her life. So too, after today’s ruling, some States may compel women to carry to term a fetus with severe physical anomalies—for example, one afflicted with Tay-Sachs disease, sure to die within a few years of birth. States may even argue that a prohibition on abortion need make no provision for protecting a woman from risk of death or physical harm. Across a vast array of circumstances, a State will be able to impose its moral choice on a woman and coerce her to give birth to a child. Enforcement of all these draconian restrictions will also be left largely to the States’ devices. 
A State can of course impose criminal penalties on abortion providers, including lengthy prison sentences. But some States will not stop there. Perhaps, in the wake of today’s decision, a state law will criminalize the woman’s conduct too, incarcerating or fining her for daring to seek or obtain an abortion. And as Texas has recently shown, a State can turn neighbor against neighbor, enlisting fellow citizens in the effort to root out anyone who tries to get an abortion, or to assist another in doing so. Today’s decision, the majority says, permits “each State” to address abortion as it pleases. That is cold comfort, of course, for the poor woman who cannot get the money to fly to a distant State for a procedure. Above all others, women lacking financial resources will suffer from today’s decision. In any event, interstate restrictions will also soon be in the offing. After this decision, some States may block women from traveling out of State to obtain abortions, or even from receiving abortion medications from out of State. Some may criminalize efforts, including the provision of information or funding, to help women gain access to other States’ abortion services. Most threatening of all, no language in today’s decision stops the Federal Government from prohibiting abortions nationwide, once again from the moment of conception and without exceptions for rape or incest. If that happens, “the views of [an individual State’s] citizens” will not matter. The challenge for a woman will be to finance a trip not to “New York [or] California” but to Toronto. Whatever the exact scope of the coming laws, one result of today’s decision is certain: the curtailment of women’s rights, and of their status as free and equal citizens. Yesterday, the Constitution guaranteed that a woman confronted with an unplanned pregnancy could (within reasonable limits) make her own decision about whether to bear a child, with all the life-transforming consequences that act involves. But no longer. As of today, this Court holds, a State can always force a woman to give birth, prohibiting even the earliest abortions. A State can thus transform what, when freely undertaken, is a wonder into what, when forced, may be a nightmare. Some women, especially women of means, will find ways around the State’s assertion of power. Others—those without money or childcare or the ability to take time off from work—will not be so fortunate. Maybe they will try an unsafe method of abortion, and come to physical harm, or even die. Maybe they will undergo pregnancy and have a child, but at significant personal or familial cost. At the least, they will incur the cost of losing control of their lives. The Constitution will, today’s majority holds, provide no shield, despite its guarantees of liberty and equality for all. And no one should be confident that this majority is done with its work. The right Roe and Casey recognized does not stand alone. To the contrary, the Court has linked it for decades to other settled freedoms involving bodily integrity, familial relationships, and procreation. Most obviously, the right to terminate a pregnancy arose straight out of the right to purchase and use contraception. The majority (or to be more accurate, most of it) is eager to tell us today that nothing it does “cast[s] doubt on precedents that do not concern abortion.” But how could that be? 
The lone rationale for what the majority does today is that the right to elect an abortion is not “deeply rooted in history”: Not until Roe, the majority argues, did people think abortion fell within the Constitution’s guarantee of liberty. The same could be said, though, of most of the rights the majority claims it is not tampering with. The majority could write just as long an opinion showing, for example, that until the mid-20th century, “there was no support in American law for a constitutional right to obtain [contraceptives].” So one of two things must be true. Either the majority does not really believe in its own reasoning. Or if it does, all rights that have no history stretching back to the mid-19th century are insecure. Either the mass of the majority’s opinion is hypocrisy, or additional constitutional rights are under threat. It is one or the other. One piece of evidence on that score seems especially salient: The majority’s cavalier approach to overturning this Court’s precedents. Stare decisis is the Latin phrase for a foundation stone of the rule of law: that things decided should stay decided unless there is a very good reason for change. It is a doctrine of judicial modesty and humility. Those qualities are not evident in today’s opinion. The majority has no good reason for the upheaval in law and society it sets off. Women have relied on the availability of abortion both in structuring their relationships and in planning their lives. The legal framework Roe and Casey developed to balance the competing interests in this sphere has proved workable in courts across the country. No recent developments, in either law or fact, have eroded or cast doubt on those precedents. Nothing, in short, has changed.
false
30
13
1,449
null
230
Answer the questions from only the provided text. Do not use any external resources or prior knowledge. Explain your answer but do not exceed 250 words per answer.
My family has been grazing our cattle on federal government land that is not U.S. Fish and Wildlife Service or a National Park for 75 years that has been banned from being used for geothermal leasing. Do we have protected rights to keep grazing our cattle on that land?
Lands and interest in lands owned by the United States (i.e., federal lands) have been withdrawn from agency management under various public land laws. Federal land withdrawals typically seek to preclude lands from being used for certain purposes (i.e., withdraw them)in order to dedicate them to other purposes or to maintain other public values. For example, some laws established or expanded federal land designations, such as wilderness areas or units of the National Park System, and withdrew the lands apparently to foster the primary purposes of these designations. Withdrawals affect lands managed by agencies including the four major land management agencies: the Bureau of Land Management (BLM), U.S. Fish and Wildlife Service (FWS), and National Park Service (NPS), all in the Department of the Interior, and the U.S. Forest Service (FS), in the Department of Agriculture. The first component of the example provision generally would bar third parties from applying to take ownership and obtaining possession of the lands or resources on the lands under public land laws. However, the lack of a comprehensive list of public land laws—and the lack of a single, consistent definition of the term public land laws itself over time—makes it challenging to determine the precise meaning and applicability. The second component generally would prevent the withdrawn lands from being available for new mining (e.g., under theGeneral Mining Law of 1872). The third component generally would prevent the withdrawn lands from being available for new mineral leasing, sale of mineral materials, and geothermal leasing (e.g., under the Mineral Leasing Act of 1920, Materials Act of 1947, and Geothermal Steam Act of 1970). Together, the three components primarily would affect BLM and FS, because laws governing lands managed by those agencies generally allow for energy and mineral development and provide broader authority to convey lands out of federal ownership than laws governing NPS and FWS lands. Typically, the three components would not bar various surface uses that otherwise might be allowed, possibly including recreation, hunting, and livestock grazing. However, some uses might be limited by Congress or by subsequent agency actions, such as amendments to land management plans, if the uses are inconsistent with the withdrawal’s purposes. Defining “Valid Existing Rights” As used in legislated withdrawals, a “valid existing right” is a third-party (i.e., nonfederal) interest in federal land that the relevant federal agency cannot terminate or unduly limit.82 To have a valid existing right, the third party must  have met the requirements under the relevant law to obtain a property interest in the land (i.e., the property interest must be valid);  have had a protectable interest before the United States withdraws the land (i.e., the property interest was existing at the time of withdrawal);83 and  possess a property interest (or in some cases a possessory interest) in the land that constitutes a right for purposes of withdrawals (i.e., it must be a right).84 Valid The validity of the interest depends on whether the third party has met the requirements of the law under which it alleges to have secured the property interest. First, the interest itself must be legitimate (i.e., supported by evidence of the factual basis required by the relevant statute). 
For example, to secure a mining claim as a valid right under the mining laws, a claimant must demonstrate that they have made a “valid discovery” of a valuable mineral deposit that can be extracted and marketed. Existing The second requirement for a third party to have a “valid existing right” is that the property interest existed at the time of withdrawal.90 Depending on the legal basis for the right, a third party obtains an interest in federal land either (1) once they meet the statutory requirements, without the federal agency having to act, or (2) when the federal agency exercises its discretion to grant the property interest after the third party meets the relevant statutory requirements. 91 Third parties claiming property interests under laws that do not require the federal agency to grant the interest have an existing property interest as soon as they meet the law’s requirements.92 For example, a claimant under federal mining laws is entitled to the claim once they complete the statutory steps described above (discovery and location).93 Whether the Secretary of the Interior has issued a land patent to transfer title to the claimant does not affect the claimant’s right to the land; once federal mining law requirements are met, the property right “vests” (i.e., ownership is transferred to the claimant) and the right exists. 94 In some cases, the claimant need not complete all of the required steps before the withdrawal to obtain an existing right. If the law allows claims to relate back to occupancy (i.e., be back-dated to when the claimant first occupied the land), claimants may have existing rights if they occupied the land before withdrawal and ultimately complete the remaining steps required by law.95 Other laws provide that a claimant’s interest in federal land only becomes a valid existing right once the Secretary has acted to make it valid. 96 For example, third parties acquire oil and gas leases when the Secretary of the Interior approves their application. 97 Although courts and agencies have recognized these leases as valid existing rights in various contexts, they have not recognized applications for oil and gas leases or other leasehold interests in federal land. Courts and agencies have at times concluded that a third party has a valid existing right despite not having established an interest by law before the land is withdrawn. 99 The Solicitor of the Department of the Interior has offered “an expansive interpretation of ‘existing valid rights’ in the context of withdrawal” 100 that includes “all prior valid applications for entry, selection, or location, which were substantially complete at the date of the withdrawal” and “[c]laims under the Color of Title Act of December 22, 1928.”101 A court or agency also may recognize a valid existing right, even if the claimant is not legally entitled to it, because it would be equitable (i.e., consistent with the principles of justice). 102 Rights Not all uses of or interests in federal land qualify as valid existing “rights.” The third party usually must have obtained a property interest in the land to have a right; merely using the land generally is insufficient to establish a valid existing right. 
103 To determine whether the asserted interest qualifies as a right, courts and agencies examine the law authorizing the interest and the withdrawal law.104 Courts and agencies have recognized a number of property interests as protected rights, such as entitlements to land patents under mining laws and entry-based laws such as the Homestead Acts and the Trade and Manufacturing Site Act;105 land grants to states;106 rights-of-way;107 and mineral leases.108 Courts and agencies also have deemed certain possessory interests protected, the most common example being perfected but unpatented mining claims. 109 However, they have declined to recognize other possessory interests as valid existing rights.110 Courts and agencies have generally not recognized permits, such as grazing permits, as protected property rights for purposes of interpreting withdrawals, absent a specific provision in the withdrawal law or order.111
Answer the questions from only the provided text. Do not use any external resources or prior knowledge. Explain your answer but do not exceed 250 words per answer. Lands and interest in lands owned by the United States (i.e., federal lands) have been withdrawn from agency management under various public land laws. Federal land withdrawals typically seek to preclude lands from being used for certain purposes (i.e., withdraw them)in order to dedicate them to other purposes or to maintain other public values. For example, some laws established or expanded federal land designations, such as wilderness areas or units of the National Park System, and withdrew the lands apparently to foster the primary purposes of these designations. Withdrawals affect lands managed by agencies including the four major land management agencies: the Bureau of Land Management (BLM), U.S. Fish and Wildlife Service (FWS), and National Park Service (NPS), all in the Department of the Interior, and the U.S. Forest Service (FS), in the Department of Agriculture. The first component of the example provision generally would bar third parties from applying to take ownership and obtaining possession of the lands or resources on the lands under public land laws. However, the lack of a comprehensive list of public land laws—and the lack of a single, consistent definition of the term public land laws itself over time—makes it challenging to determine the precise meaning and applicability. The second component generally would prevent the withdrawn lands from being available for new mining (e.g., under theGeneral Mining Law of 1872). The third component generally would prevent the withdrawn lands from being available for new mineral leasing, sale of mineral materials, and geothermal leasing (e.g., under the Mineral Leasing Act of 1920, Materials Act of 1947, and Geothermal Steam Act of 1970). Together, the three components primarily would affect BLM and FS, because laws governing lands managed by those agencies generally allow for energy and mineral development and provide broader authority to convey lands out of federal ownership than laws governing NPS and FWS lands. Typically, the three components would not bar various surface uses that otherwise might be allowed, possibly including recreation, hunting, and livestock grazing. However, some uses might be limited by Congress or by subsequent agency actions, such as amendments to land management plans, if the uses are inconsistent with the withdrawal’s purposes. Defining “Valid Existing Rights” As used in legislated withdrawals, a “valid existing right” is a third-party (i.e., nonfederal) interest in federal land that the relevant federal agency cannot terminate or unduly limit.82 To have a valid existing right, the third party must  have met the requirements under the relevant law to obtain a property interest in the land (i.e., the property interest must be valid);  have had a protectable interest before the United States withdraws the land (i.e., the property interest was existing at the time of withdrawal);83 and  possess a property interest (or in some cases a possessory interest) in the land that constitutes a right for purposes of withdrawals (i.e., it must be a right).84 Valid The validity of the interest depends on whether the third party has met the requirements of the law under which it alleges to have secured the property interest. First, the interest itself must be legitimate (i.e., supported by evidence of the factual basis required by the relevant statute). 
For example, to secure a mining claim as a valid right under the mining laws, a claimant must demonstrate that they have made a “valid discovery” of a valuable mineral deposit that can be extracted and marketed. Existing The second requirement for a third party to have a “valid existing right” is that the property interest existed at the time of withdrawal.90 Depending on the legal basis for the right, a third party obtains an interest in federal land either (1) once they meet the statutory requirements, without the federal agency having to act, or (2) when the federal agency exercises its discretion to grant the property interest after the third party meets the relevant statutory requirements. 91 Third parties claiming property interests under laws that do not require the federal agency to grant the interest have an existing property interest as soon as they meet the law’s requirements.92 For example, a claimant under federal mining laws is entitled to the claim once they complete the statutory steps described above (discovery and location).93 Whether the Secretary of the Interior has issued a land patent to transfer title to the claimant does not affect the claimant’s right to the land; once federal mining law requirements are met, the property right “vests” (i.e., ownership is transferred to the claimant) and the right exists. 94 In some cases, the claimant need not complete all of the required steps before the withdrawal to obtain an existing right. If the law allows claims to relate back to occupancy (i.e., be back-dated to when the claimant first occupied the land), claimants may have existing rights if they occupied the land before withdrawal and ultimately complete the remaining steps required by law.95 Other laws provide that a claimant’s interest in federal land only becomes a valid existing right once the Secretary has acted to make it valid. 96 For example, third parties acquire oil and gas leases when the Secretary of the Interior approves their application. 97 Although courts and agencies have recognized these leases as valid existing rights in various contexts, they have not recognized applications for oil and gas leases or other leasehold interests in federal land. Courts and agencies have at times concluded that a third party has a valid existing right despite not having established an interest by law before the land is withdrawn. 99 The Solicitor of the Department of the Interior has offered “an expansive interpretation of ‘existing valid rights’ in the context of withdrawal” 100 that includes “all prior valid applications for entry, selection, or location, which were substantially complete at the date of the withdrawal” and “[c]laims under the Color of Title Act of December 22, 1928.”101 A court or agency also may recognize a valid existing right, even if the claimant is not legally entitled to it, because it would be equitable (i.e., consistent with the principles of justice). 102 Rights Not all uses of or interests in federal land qualify as valid existing “rights.” The third party usually must have obtained a property interest in the land to have a right; merely using the land generally is insufficient to establish a valid existing right. 
103 To determine whether the asserted interest qualifies as a right, courts and agencies examine the law authorizing the interest and the withdrawal law.104 Courts and agencies have recognized a number of property interests as protected rights, such as entitlements to land patents under mining laws and entry-based laws such as the Homestead Acts and the Trade and Manufacturing Site Act;105 land grants to states;106 rights-of-way;107 and mineral leases.108 Courts and agencies also have deemed certain possessory interests protected, the most common example being perfected but unpatented mining claims. 109 However, they have declined to recognize other possessory interests as valid existing rights.110 Courts and agencies have generally not recognized permits, such as grazing permits, as protected property rights for purposes of interpreting withdrawals, absent a specific provision in the withdrawal law or order.111 My family has been grazing our cattle on federal government land that is not U.S. Fish and Wildlife Service or a National Park for 75 years that has been banned from being used for geothermal leasing. Do we have protected rights to keep grazing our cattle on that land?
Answer the questions from only the provided text. Do not use any external resources or prior knowledge. Explain your answer but do not exceed 250 words per answer. EVIDENCE: Lands and interest in lands owned by the United States (i.e., federal lands) have been withdrawn from agency management under various public land laws. Federal land withdrawals typically seek to preclude lands from being used for certain purposes (i.e., withdraw them)in order to dedicate them to other purposes or to maintain other public values. For example, some laws established or expanded federal land designations, such as wilderness areas or units of the National Park System, and withdrew the lands apparently to foster the primary purposes of these designations. Withdrawals affect lands managed by agencies including the four major land management agencies: the Bureau of Land Management (BLM), U.S. Fish and Wildlife Service (FWS), and National Park Service (NPS), all in the Department of the Interior, and the U.S. Forest Service (FS), in the Department of Agriculture. The first component of the example provision generally would bar third parties from applying to take ownership and obtaining possession of the lands or resources on the lands under public land laws. However, the lack of a comprehensive list of public land laws—and the lack of a single, consistent definition of the term public land laws itself over time—makes it challenging to determine the precise meaning and applicability. The second component generally would prevent the withdrawn lands from being available for new mining (e.g., under theGeneral Mining Law of 1872). The third component generally would prevent the withdrawn lands from being available for new mineral leasing, sale of mineral materials, and geothermal leasing (e.g., under the Mineral Leasing Act of 1920, Materials Act of 1947, and Geothermal Steam Act of 1970). Together, the three components primarily would affect BLM and FS, because laws governing lands managed by those agencies generally allow for energy and mineral development and provide broader authority to convey lands out of federal ownership than laws governing NPS and FWS lands. Typically, the three components would not bar various surface uses that otherwise might be allowed, possibly including recreation, hunting, and livestock grazing. However, some uses might be limited by Congress or by subsequent agency actions, such as amendments to land management plans, if the uses are inconsistent with the withdrawal’s purposes. Defining “Valid Existing Rights” As used in legislated withdrawals, a “valid existing right” is a third-party (i.e., nonfederal) interest in federal land that the relevant federal agency cannot terminate or unduly limit.82 To have a valid existing right, the third party must  have met the requirements under the relevant law to obtain a property interest in the land (i.e., the property interest must be valid);  have had a protectable interest before the United States withdraws the land (i.e., the property interest was existing at the time of withdrawal);83 and  possess a property interest (or in some cases a possessory interest) in the land that constitutes a right for purposes of withdrawals (i.e., it must be a right).84 Valid The validity of the interest depends on whether the third party has met the requirements of the law under which it alleges to have secured the property interest. 
First, the interest itself must be legitimate (i.e., supported by evidence of the factual basis required by the relevant statute). For example, to secure a mining claim as a valid right under the mining laws, a claimant must demonstrate that they have made a “valid discovery” of a valuable mineral deposit that can be extracted and marketed. Existing The second requirement for a third party to have a “valid existing right” is that the property interest existed at the time of withdrawal.90 Depending on the legal basis for the right, a third party obtains an interest in federal land either (1) once they meet the statutory requirements, without the federal agency having to act, or (2) when the federal agency exercises its discretion to grant the property interest after the third party meets the relevant statutory requirements. 91 Third parties claiming property interests under laws that do not require the federal agency to grant the interest have an existing property interest as soon as they meet the law’s requirements.92 For example, a claimant under federal mining laws is entitled to the claim once they complete the statutory steps described above (discovery and location).93 Whether the Secretary of the Interior has issued a land patent to transfer title to the claimant does not affect the claimant’s right to the land; once federal mining law requirements are met, the property right “vests” (i.e., ownership is transferred to the claimant) and the right exists. 94 In some cases, the claimant need not complete all of the required steps before the withdrawal to obtain an existing right. If the law allows claims to relate back to occupancy (i.e., be back-dated to when the claimant first occupied the land), claimants may have existing rights if they occupied the land before withdrawal and ultimately complete the remaining steps required by law.95 Other laws provide that a claimant’s interest in federal land only becomes a valid existing right once the Secretary has acted to make it valid. 96 For example, third parties acquire oil and gas leases when the Secretary of the Interior approves their application. 97 Although courts and agencies have recognized these leases as valid existing rights in various contexts, they have not recognized applications for oil and gas leases or other leasehold interests in federal land. Courts and agencies have at times concluded that a third party has a valid existing right despite not having established an interest by law before the land is withdrawn. 99 The Solicitor of the Department of the Interior has offered “an expansive interpretation of ‘existing valid rights’ in the context of withdrawal” 100 that includes “all prior valid applications for entry, selection, or location, which were substantially complete at the date of the withdrawal” and “[c]laims under the Color of Title Act of December 22, 1928.”101 A court or agency also may recognize a valid existing right, even if the claimant is not legally entitled to it, because it would be equitable (i.e., consistent with the principles of justice). 102 Rights Not all uses of or interests in federal land qualify as valid existing “rights.” The third party usually must have obtained a property interest in the land to have a right; merely using the land generally is insufficient to establish a valid existing right. 
103 To determine whether the asserted interest qualifies as a right, courts and agencies examine the law authorizing the interest and the withdrawal law.104 Courts and agencies have recognized a number of property interests as protected rights, such as entitlements to land patents under mining laws and entry-based laws such as the Homestead Acts and the Trade and Manufacturing Site Act;105 land grants to states;106 rights-of-way;107 and mineral leases.108 Courts and agencies also have deemed certain possessory interests protected, the most common example being perfected but unpatented mining claims. 109 However, they have declined to recognize other possessory interests as valid existing rights.110 Courts and agencies have generally not recognized permits, such as grazing permits, as protected property rights for purposes of interpreting withdrawals, absent a specific provision in the withdrawal law or order.111 USER: My family has been grazing our cattle on federal government land that is not U.S. Fish and Wildlife Service or a National Park for 75 years that has been banned from being used for geothermal leasing. Do we have protected rights to keep grazing our cattle on that land? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
28
49
1,191
null
268
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
I just had a baby and he was diagnosed with a VSD and I am freaking out. What is the cause of this and how common is it? Can the hole close it on its own? What symptoms should I look for in my baby? If the hole is too large or doesn't close, what is the next step?
About Ventricular Septal Defect Key points A ventricular septal defect (pronounced ven·tric·u·lar sep·tal de·fect) is atype of congenital heart defect. Congenital means present at birth. A ventricular septal defect is a hole in the wall (septum) that separates the two lower chambers (ventricles) of the heart. Doctor listening to baby's heart What it is A ventricular septal defect (VSD) happens during pregnancy if the wall that forms between the two ventricles does not fully develop. This leaves a hole. In babies without a heart defect, the right side of the heart pumps oxygen-poor blood from the heart to the lungs. The left side of the heart pumps oxygen-rich blood to the rest of the body. In babies with a VSD, blood flows from the left ventricle through the VSD to the right ventricle and into the lungs. Keep Reading: How the Heart Works Occurrence About 42 of every 10,000 babies in the United States are born with a VSD.1 This means that about 16,800 babies are born with a VSD each year. Types An infant with a VSD can have one or more holes in different places of the septum. There are several names for these holes. Some common locations and names are listed below: Conoventricular Ventricular Septal Defect. In general, this is a hole where portions of the ventricular septum should meet just below the pulmonary and aortic valves. Perimembranous Ventricular Septal Defect. This is a hole in the upper section of the ventricular septum. Inlet Ventricular Septal Defect. This is a hole in the septum near to where the blood enters the ventricles through the tricuspid and mitral valves. This type of ventricular septal defect also might be part of another heart defect called an atrioventricular septal defect (AVSD). Muscular Ventricular Septal Defect. This is a hole in the lower, muscular part of the ventricular septum. This is the most common type of ventricular septal defect. View LargerDownload Normal heart compared with a heart with VSD A VSD is one or more holes in the wall between the ventricles. Signs and symptoms The size of the ventricular septal defect will influence what symptoms, if any, are present. Signs of a ventricular septal defect might be present at birth or might not appear until well after birth. If the hole is small, it could close on its own. The baby might not show any signs of the defect. However, if the hole is large, the baby might have symptoms, including Shortness of breath Fast or heavy breathing Sweating Tiredness while feeding Poor weight gain Complications A ventricular septal defect increases the amount of blood that flows through the lungs. This forces the heart and lungs to work harder. Overtime, if not repaired, a ventricular septal defect can increase the risk for other complications, including Heart failure High blood pressure in the lungs (called pulmonary hypertension) Irregular heart rhythms (called arrhythmia) Stroke Risk factors The causes of ventricular septal defects among most babies are unknown. Some babies have heart defects because of changes in their genes or chromosomes. A combination of genes and other risk factors may increase the risk for ventricular septal defects. These factors can include things in a mother's environment, what she eats or drinks, or the medicines she uses. Diagnosis A VSD is usually diagnosed after a baby is born. During a physical exam, a healthcare provider might hear a distinct whooshing sound, called a heart murmur. 
The size of the VSD will influence whether a healthcare provider hears a heart murmur during a physical exam. If signs or symptoms are present, the healthcare provider might request one or more tests to confirm the diagnosis. The most common test is an echocardiogram, which is an ultrasound of the heart. An echocardiogram can show how large the hole is and how much blood is flowing through the hole. A doctor has a stethoscope on a babies chest A VSD is usually diagnosed after a baby is born. Treatments Treatments for a VSD depend on the size of the hole and the problems it might cause. Many VSDs are small and close on their own. If the hole is small and causing no symptoms, the doctor will check the infant regularly. This is to ensure there are no signs of heart failure and that the hole closes. If the hole doesn't close on its own or if it's large, further action might needed. Depending on the size of the hole, symptoms, and general health of the child, the doctor might recommend either cardiac catheterization or open-heart surgery. These procedures will close the hole and restore normal blood flow. After surgery, the doctor will set up regular follow-up visits to make sure that the VSD remains closed. Medicines Some children will need medicines to help strengthen the heart muscle, lower their blood pressure, and help the body get rid of extra fluid. Nutrition Some babies with a ventricular septal defect become tired while feeding and do not eat enough to gain weight. To make sure babies have a healthy weight gain, a special high-calorie formula might be prescribed. Some babies become extremely tired while feeding and might need to be fed through a feeding tube. What to expect long-term Most children who have a VSD that closes (either on its own or with surgery) live healthy lives.
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. I just had a baby and he was diagnosed with a VSD and I am freaking out. What is the cause of this and how common is it? Can the hole close it on its own? What symptoms should I look for in my baby? If the hole is too large or doesn't close, what is the next step? About Ventricular Septal Defect Key points A ventricular septal defect (pronounced ven·tric·u·lar sep·tal de·fect) is atype of congenital heart defect. Congenital means present at birth. A ventricular septal defect is a hole in the wall (septum) that separates the two lower chambers (ventricles) of the heart. Doctor listening to baby's heart What it is A ventricular septal defect (VSD) happens during pregnancy if the wall that forms between the two ventricles does not fully develop. This leaves a hole. In babies without a heart defect, the right side of the heart pumps oxygen-poor blood from the heart to the lungs. The left side of the heart pumps oxygen-rich blood to the rest of the body. In babies with a VSD, blood flows from the left ventricle through the VSD to the right ventricle and into the lungs. Keep Reading: How the Heart Works Occurrence About 42 of every 10,000 babies in the United States are born with a VSD.1 This means that about 16,800 babies are born with a VSD each year. Types An infant with a VSD can have one or more holes in different places of the septum. There are several names for these holes. Some common locations and names are listed below: Conoventricular Ventricular Septal Defect. In general, this is a hole where portions of the ventricular septum should meet just below the pulmonary and aortic valves. Perimembranous Ventricular Septal Defect. This is a hole in the upper section of the ventricular septum. Inlet Ventricular Septal Defect. This is a hole in the septum near to where the blood enters the ventricles through the tricuspid and mitral valves. This type of ventricular septal defect also might be part of another heart defect called an atrioventricular septal defect (AVSD). Muscular Ventricular Septal Defect. This is a hole in the lower, muscular part of the ventricular septum. This is the most common type of ventricular septal defect. View LargerDownload Normal heart compared with a heart with VSD A VSD is one or more holes in the wall between the ventricles. Signs and symptoms The size of the ventricular septal defect will influence what symptoms, if any, are present. Signs of a ventricular septal defect might be present at birth or might not appear until well after birth. If the hole is small, it could close on its own. The baby might not show any signs of the defect. However, if the hole is large, the baby might have symptoms, including Shortness of breath Fast or heavy breathing Sweating Tiredness while feeding Poor weight gain Complications A ventricular septal defect increases the amount of blood that flows through the lungs. This forces the heart and lungs to work harder. Overtime, if not repaired, a ventricular septal defect can increase the risk for other complications, including Heart failure High blood pressure in the lungs (called pulmonary hypertension) Irregular heart rhythms (called arrhythmia) Stroke Risk factors The causes of ventricular septal defects among most babies are unknown. Some babies have heart defects because of changes in their genes or chromosomes. 
A combination of genes and other risk factors may increase the risk for ventricular septal defects. These factors can include things in a mother's environment, what she eats or drinks, or the medicines she uses. Diagnosis A VSD is usually diagnosed after a baby is born. During a physical exam, a healthcare provider might hear a distinct whooshing sound, called a heart murmur. The size of the VSD will influence whether a healthcare provider hears a heart murmur during a physical exam. If signs or symptoms are present, the healthcare provider might request one or more tests to confirm the diagnosis. The most common test is an echocardiogram, which is an ultrasound of the heart. An echocardiogram can show how large the hole is and how much blood is flowing through the hole. A doctor has a stethoscope on a babies chest A VSD is usually diagnosed after a baby is born. Treatments Treatments for a VSD depend on the size of the hole and the problems it might cause. Many VSDs are small and close on their own. If the hole is small and causing no symptoms, the doctor will check the infant regularly. This is to ensure there are no signs of heart failure and that the hole closes. If the hole doesn't close on its own or if it's large, further action might needed. Depending on the size of the hole, symptoms, and general health of the child, the doctor might recommend either cardiac catheterization or open-heart surgery. These procedures will close the hole and restore normal blood flow. After surgery, the doctor will set up regular follow-up visits to make sure that the VSD remains closed. Medicines Some children will need medicines to help strengthen the heart muscle, lower their blood pressure, and help the body get rid of extra fluid. Nutrition Some babies with a ventricular septal defect become tired while feeding and do not eat enough to gain weight. To make sure babies have a healthy weight gain, a special high-calorie formula might be prescribed. Some babies become extremely tired while feeding and might need to be fed through a feeding tube. What to expect long-term Most children who have a VSD that closes (either on its own or with surgery) live healthy lives. https://www.cdc.gov/heart-defects/about/ventricular-septal-defect.html
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: About Ventricular Septal Defect Key points A ventricular septal defect (pronounced ven·tric·u·lar sep·tal de·fect) is atype of congenital heart defect. Congenital means present at birth. A ventricular septal defect is a hole in the wall (septum) that separates the two lower chambers (ventricles) of the heart. Doctor listening to baby's heart What it is A ventricular septal defect (VSD) happens during pregnancy if the wall that forms between the two ventricles does not fully develop. This leaves a hole. In babies without a heart defect, the right side of the heart pumps oxygen-poor blood from the heart to the lungs. The left side of the heart pumps oxygen-rich blood to the rest of the body. In babies with a VSD, blood flows from the left ventricle through the VSD to the right ventricle and into the lungs. Keep Reading: How the Heart Works Occurrence About 42 of every 10,000 babies in the United States are born with a VSD.1 This means that about 16,800 babies are born with a VSD each year. Types An infant with a VSD can have one or more holes in different places of the septum. There are several names for these holes. Some common locations and names are listed below: Conoventricular Ventricular Septal Defect. In general, this is a hole where portions of the ventricular septum should meet just below the pulmonary and aortic valves. Perimembranous Ventricular Septal Defect. This is a hole in the upper section of the ventricular septum. Inlet Ventricular Septal Defect. This is a hole in the septum near to where the blood enters the ventricles through the tricuspid and mitral valves. This type of ventricular septal defect also might be part of another heart defect called an atrioventricular septal defect (AVSD). Muscular Ventricular Septal Defect. This is a hole in the lower, muscular part of the ventricular septum. This is the most common type of ventricular septal defect. View LargerDownload Normal heart compared with a heart with VSD A VSD is one or more holes in the wall between the ventricles. Signs and symptoms The size of the ventricular septal defect will influence what symptoms, if any, are present. Signs of a ventricular septal defect might be present at birth or might not appear until well after birth. If the hole is small, it could close on its own. The baby might not show any signs of the defect. However, if the hole is large, the baby might have symptoms, including Shortness of breath Fast or heavy breathing Sweating Tiredness while feeding Poor weight gain Complications A ventricular septal defect increases the amount of blood that flows through the lungs. This forces the heart and lungs to work harder. Overtime, if not repaired, a ventricular septal defect can increase the risk for other complications, including Heart failure High blood pressure in the lungs (called pulmonary hypertension) Irregular heart rhythms (called arrhythmia) Stroke Risk factors The causes of ventricular septal defects among most babies are unknown. Some babies have heart defects because of changes in their genes or chromosomes. A combination of genes and other risk factors may increase the risk for ventricular septal defects. These factors can include things in a mother's environment, what she eats or drinks, or the medicines she uses. Diagnosis A VSD is usually diagnosed after a baby is born. 
During a physical exam, a healthcare provider might hear a distinct whooshing sound, called a heart murmur. The size of the VSD will influence whether a healthcare provider hears a heart murmur during a physical exam. If signs or symptoms are present, the healthcare provider might request one or more tests to confirm the diagnosis. The most common test is an echocardiogram, which is an ultrasound of the heart. An echocardiogram can show how large the hole is and how much blood is flowing through the hole. A doctor has a stethoscope on a babies chest A VSD is usually diagnosed after a baby is born. Treatments Treatments for a VSD depend on the size of the hole and the problems it might cause. Many VSDs are small and close on their own. If the hole is small and causing no symptoms, the doctor will check the infant regularly. This is to ensure there are no signs of heart failure and that the hole closes. If the hole doesn't close on its own or if it's large, further action might needed. Depending on the size of the hole, symptoms, and general health of the child, the doctor might recommend either cardiac catheterization or open-heart surgery. These procedures will close the hole and restore normal blood flow. After surgery, the doctor will set up regular follow-up visits to make sure that the VSD remains closed. Medicines Some children will need medicines to help strengthen the heart muscle, lower their blood pressure, and help the body get rid of extra fluid. Nutrition Some babies with a ventricular septal defect become tired while feeding and do not eat enough to gain weight. To make sure babies have a healthy weight gain, a special high-calorie formula might be prescribed. Some babies become extremely tired while feeding and might need to be fed through a feeding tube. What to expect long-term Most children who have a VSD that closes (either on its own or with surgery) live healthy lives. USER: I just had a baby and he was diagnosed with a VSD and I am freaking out. What is the cause of this and how common is it? Can the hole close it on its own? What symptoms should I look for in my baby? If the hole is too large or doesn't close, what is the next step? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
24
59
888
null
647
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
What is the mechanism of action of the drug Amoxicillin and what are some of the potential side effects involved with its usage? Respond in more than 150 words.
Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. This drug is indicated for the treatment of infections caused by susceptible isolates of selected bacteria, specifically those that are beta-lactamase–negative, including ear, nose, and throat infections, Helicobacter pylori eradication, lower respiratory and urinary tract infections, acute bacterial sinusitis, and skin and structure infections. Amoxicillin is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. This activity delves into the indications, mechanism of action, administration, contraindications, and adverse event profiles associated with amoxicillin. This activity equips clinicians with a comprehensive understanding of amoxicillin to optimally enhance their ability to manage infectious diseases in patients. Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. The medication is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. FDA-Approved Indications Amoxicillin is indicated for treating infections caused by susceptible isolates of selected bacteria, specifically beta-lactamase–negative, in the conditions listed below. Ear, nose, and throat infections: Amoxicillin is approved for the treatment of tonsillitis, pharyngitis, and otitis media in adults and pediatric patients aged 12 and older. The microbiological spectrum covers infections caused by beta-lactamase–negative Streptococcus species (alpha- and beta-hemolytic isolates only), Streptococcus pneumoniae, Staphylococcus species, or H influenzae.[1] Helicobacter pylori eradication: H pylori eradication involves triple therapy using clarithromycin, amoxicillin, and lansoprazole to reduce the risk of duodenal ulcer recurrence. In addition, dual treatment with amoxicillin and lansoprazole is FDA-approved for eradicating H pylori infection.[2] Lower respiratory tract infections: Amoxicillin is prescribed for treating lower respiratory tract infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic strains), Pneumococcus or Staphylococcus species, or H influenzae. 
In cases of community-acquired pneumonia, the Infectious Diseases Society of America (IDSA) recommends a combination therapy comprising amoxicillin and a macrolide antibiotic.[3] Acute bacterial sinusitis: The treatment for acute bacterial sinusitis involves addressing infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic isolates), S pneumoniae, Staphylococcus species, or H influenzae.[4] Skin and skin structure infections: Amoxicillin in the immediate-release formulation is prescribed to treat skin infections caused by beta-lactamase–negative Streptococcus species (restricted to alpha- and beta-hemolytic strains), Staphylococcus species, or E coli.[5] Urinary tract infection: Amoxicillin is indicated for treating genitourinary tract infections caused by beta-lactamase–negative E coli, Proteus mirabilis, or Enterococcus faecalis.[6] The Centers for Disease Control and Prevention (CDC) recommends using amoxicillin as a second-line agent for post-exposure prophylaxis for anthrax.[7] Off-label Uses Amoxicillin is often used for Lyme disease if there are contraindications for doxycycline.[8] Infectious endocarditis prophylaxis is recommended for individuals with high-risk cardiac conditions, such as a prosthetic cardiac valve or congenital heart disease, using amoxicillin.[9] Amoxicillin, combined with metronidazole, is used to treat periodontitis.[10] Amoxicillin is often used for the treatment of actinomycosis.[11] Amoxicillin belongs to the class of beta-lactam antimicrobials. Beta-lactams bind to penicillin-binding proteins, inhibiting transpeptidation — a crucial step in cell wall synthesis involving cross-linking. This action activates autolytic enzymes in the bacterial cell wall, resulting in cell wall lysis and bacterial cell destruction. This mechanism is known as bactericidal killing.[12] Amoxicillin administration can be combined with a beta-lactamase inhibitor, such as clavulanic acid or sulbactam. These inhibitors function by irreversibly binding to the catalytic site of the organism's beta-lactamase enzyme, preventing resistance to the original beta-lactam ring of amoxicillin. Although these inhibitors lack inherent bactericidal activity, their combination with amoxicillin may broaden its spectrum to include organisms producing the beta-lactamase enzyme.[13] Pharmacokinetics Absorption: Amoxicillin exhibits stability in the presence of gastric acid and is rapidly absorbed after oral administration, with average peak blood levels typically reached within 1 to 2 hours. Distribution: Amoxicillin displays significant tissue and fluid diffusion throughout the body, with the exception of the brain and spinal fluid, except in cases where meningeal inflammation is present. Amoxicillin exhibits approximately 20% plasma protein binding. Metabolism: The metabolism of amoxicillin involves oxidation, hydroxylation, and deamination processes. Amoxicillin is a substrate of organic anion transporters (OATs), specifically OATs 1 and 3.[14][15] Elimination: Amoxicillin has an approximate half-life of 61.3 minutes, and about 60% of the administered dose is excreted in the urine within 6 to 8 hours. Co-administration of probenecid can delay amoxicillin excretion, as the majority of the drug is eliminated unchanged in the urine. Common Adverse Drug Reactions Although generally well-tolerated, amoxicillin may lead to common gastrointestinal symptoms, including nausea, vomiting, and diarrhea. 
Additional adverse drug reactions associated with amoxicillin are listed below. Nephrotoxicity: Amoxicillin may cause crystalluria and interstitial nephritis.[23][24] Hypersensitivity reactions: Amoxicillin has the potential to cause hypersensitivity reactions categorized as type I, II, III, or IV. Differentiating between a type-I and type-IV reaction is crucial due to varying danger levels. A type-I hypersensitivity reaction involves an IgE-mediated response in sensitized patients, inducing widespread histamine release, resulting in an urticarial-like pruritic rash or severe anaphylaxis. In contrast, a type-IV hypersensitivity reaction is not mediated by histamine release and typically presents as a more papular or morbilliform rash, often without itching. Notably, almost all patients receiving amoxicillin inadvertently for infectious mononucleosis may develop a maculopapular rash attributed to a type IV–mediated hypersensitivity reaction. Notably, reactions of this type are not associated with anaphylaxis.[25] Hepatotoxicity: Cases of idiosyncratic liver injury have been reported in individuals receiving amoxicillin. The associated serum enzyme pattern reveals a hepatocellular pattern characterized by significant elevations in aspartate transaminase (AST) and alanine transaminase (ALT), with minimal increases in alkaline phosphatase. Most patients experience rapid recovery upon withdrawal of amoxicillin. The cause of liver injury associated with amoxicillin use is attributed to hypersensitivity. Although rare, cases of acute liver failure and vanishing bile duct syndrome have been reported. Corticosteroids are often used to treat allergic reactions caused by penicillin-related immunoallergic hepatitis, which is a rare cause of clinically apparent liver injury, with a likelihood score of B.[26] Postmarketing Adverse Drug Reactions Gastrointestinal: Gastrointestinal effects may include black hairy tongue, pseudomembranous colitis, and hemorrhagic colitis.[27] Neurological: Neurological effects may encompass reversible hyperactivity, agitation, anxiety, insomnia, confusion, convulsions, and aseptic meningitis.[28] Dermatological: Dermatological effects may manifest as serum sickness-like reactions, erythematous maculopapular rashes, exfoliative dermatitis, toxic epidermal necrolysis, and hypersensitivity vasculitis.[30]
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. What is the mechanism of action of the drug Amoxicillin and what are some of the potential side effects involved with its usage? Respond in more than 150 words. Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. This drug is indicated for the treatment of infections caused by susceptible isolates of selected bacteria, specifically those that are beta-lactamase–negative, including ear, nose, and throat infections, Helicobacter pylori eradication, lower respiratory and urinary tract infections, acute bacterial sinusitis, and skin and structure infections. Amoxicillin is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. This activity delves into the indications, mechanism of action, administration, contraindications, and adverse event profiles associated with amoxicillin. This activity equips clinicians with a comprehensive understanding of amoxicillin to optimally enhance their ability to manage infectious diseases in patients. Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. The medication is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. FDA-Approved Indications Amoxicillin is indicated for treating infections caused by susceptible isolates of selected bacteria, specifically beta-lactamase–negative, in the conditions listed below. Ear, nose, and throat infections: Amoxicillin is approved for the treatment of tonsillitis, pharyngitis, and otitis media in adults and pediatric patients aged 12 and older. The microbiological spectrum covers infections caused by beta-lactamase–negative Streptococcus species (alpha- and beta-hemolytic isolates only), Streptococcus pneumoniae, Staphylococcus species, or H influenzae.[1] Helicobacter pylori eradication: H pylori eradication involves triple therapy using clarithromycin, amoxicillin, and lansoprazole to reduce the risk of duodenal ulcer recurrence. 
In addition, dual treatment with amoxicillin and lansoprazole is FDA-approved for eradicating H pylori infection.[2] Lower respiratory tract infections: Amoxicillin is prescribed for treating lower respiratory tract infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic strains), Pneumococcus or Staphylococcus species, or H influenzae. In cases of community-acquired pneumonia, the Infectious Diseases Society of America (IDSA) recommends a combination therapy comprising amoxicillin and a macrolide antibiotic.[3] Acute bacterial sinusitis: The treatment for acute bacterial sinusitis involves addressing infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic isolates), S pneumoniae, Staphylococcus species, or H influenzae.[4] Skin and skin structure infections: Amoxicillin in the immediate-release formulation is prescribed to treat skin infections caused by beta-lactamase–negative Streptococcus species (restricted to alpha- and beta-hemolytic strains), Staphylococcus species, or E coli.[5] Urinary tract infection: Amoxicillin is indicated for treating genitourinary tract infections caused by beta-lactamase–negative E coli, Proteus mirabilis, or Enterococcus faecalis.[6] The Centers for Disease Control and Prevention (CDC) recommends using amoxicillin as a second-line agent for post-exposure prophylaxis for anthrax.[7] Off-label Uses Amoxicillin is often used for Lyme disease if there are contraindications for doxycycline.[8] Infectious endocarditis prophylaxis is recommended for individuals with high-risk cardiac conditions, such as a prosthetic cardiac valve or congenital heart disease, using amoxicillin.[9] Amoxicillin, combined with metronidazole, is used to treat periodontitis.[10] Amoxicillin is often used for the treatment of actinomycosis.[11] Amoxicillin belongs to the class of beta-lactam antimicrobials. Beta-lactams bind to penicillin-binding proteins, inhibiting transpeptidation — a crucial step in cell wall synthesis involving cross-linking. This action activates autolytic enzymes in the bacterial cell wall, resulting in cell wall lysis and bacterial cell destruction. This mechanism is known as bactericidal killing.[12] Amoxicillin administration can be combined with a beta-lactamase inhibitor, such as clavulanic acid or sulbactam. These inhibitors function by irreversibly binding to the catalytic site of the organism's beta-lactamase enzyme, preventing resistance to the original beta-lactam ring of amoxicillin. Although these inhibitors lack inherent bactericidal activity, their combination with amoxicillin may broaden its spectrum to include organisms producing the beta-lactamase enzyme.[13] Pharmacokinetics Absorption: Amoxicillin exhibits stability in the presence of gastric acid and is rapidly absorbed after oral administration, with average peak blood levels typically reached within 1 to 2 hours. Distribution: Amoxicillin displays significant tissue and fluid diffusion throughout the body, with the exception of the brain and spinal fluid, except in cases where meningeal inflammation is present. Amoxicillin exhibits approximately 20% plasma protein binding. Metabolism: The metabolism of amoxicillin involves oxidation, hydroxylation, and deamination processes. 
Amoxicillin is a substrate of organic anion transporters (OATs), specifically OATs 1 and 3.[14][15] Elimination: Amoxicillin has an approximate half-life of 61.3 minutes, and about 60% of the administered dose is excreted in the urine within 6 to 8 hours. Co-administration of probenecid can delay amoxicillin excretion, as the majority of the drug is eliminated unchanged in the urine. Common Adverse Drug Reactions Although generally well-tolerated, amoxicillin may lead to common gastrointestinal symptoms, including nausea, vomiting, and diarrhea. Additional adverse drug reactions associated with amoxicillin are listed below. Nephrotoxicity: Amoxicillin may cause crystalluria and interstitial nephritis.[23][24] Hypersensitivity reactions: Amoxicillin has the potential to cause hypersensitivity reactions categorized as type I, II, III, or IV. Differentiating between a type-I and type-IV reaction is crucial due to varying danger levels. A type-I hypersensitivity reaction involves an IgE-mediated response in sensitized patients, inducing widespread histamine release, resulting in an urticarial-like pruritic rash or severe anaphylaxis. In contrast, a type-IV hypersensitivity reaction is not mediated by histamine release and typically presents as a more papular or morbilliform rash, often without itching. Notably, almost all patients receiving amoxicillin inadvertently for infectious mononucleosis may develop a maculopapular rash attributed to a type IV–mediated hypersensitivity reaction. Notably, reactions of this type are not associated with anaphylaxis.[25] Hepatotoxicity: Cases of idiosyncratic liver injury have been reported in individuals receiving amoxicillin. The associated serum enzyme pattern reveals a hepatocellular pattern characterized by significant elevations in aspartate transaminase (AST) and alanine transaminase (ALT), with minimal increases in alkaline phosphatase. Most patients experience rapid recovery upon withdrawal of amoxicillin. The cause of liver injury associated with amoxicillin use is attributed to hypersensitivity. Although rare, cases of acute liver failure and vanishing bile duct syndrome have been reported. Corticosteroids are often used to treat allergic reactions caused by penicillin-related immunoallergic hepatitis, which is a rare cause of clinically apparent liver injury, with a likelihood score of B.[26] Postmarketing Adverse Drug Reactions Gastrointestinal: Gastrointestinal effects may include black hairy tongue, pseudomembranous colitis, and hemorrhagic colitis.[27] Neurological: Neurological effects may encompass reversible hyperactivity, agitation, anxiety, insomnia, confusion, convulsions, and aseptic meningitis.[28] Dermatological: Dermatological effects may manifest as serum sickness-like reactions, erythematous maculopapular rashes, exfoliative dermatitis, toxic epidermal necrolysis, and hypersensitivity vasculitis.[30] https://www.ncbi.nlm.nih.gov/books/NBK482250/
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. This drug is indicated for the treatment of infections caused by susceptible isolates of selected bacteria, specifically those that are beta-lactamase–negative, including ear, nose, and throat infections, Helicobacter pylori eradication, lower respiratory and urinary tract infections, acute bacterial sinusitis, and skin and structure infections. Amoxicillin is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. This activity delves into the indications, mechanism of action, administration, contraindications, and adverse event profiles associated with amoxicillin. This activity equips clinicians with a comprehensive understanding of amoxicillin to optimally enhance their ability to manage infectious diseases in patients. Amoxicillin is a widely utilized beta-lactam antimicrobial drug approved by the U.S. Food and Drug Administration (FDA) for use in the primary care setting. Amoxicillin is an aminopenicillin created by adding an extra amino group to penicillin to battle antibiotic resistance. The medication is effective against a wide range of gram-positive bacteria, offering additional coverage against some gram-negative organisms compared to penicillin. Amoxicillin's spectrum of activity includes coverage against Streptococcus species, with heightened efficacy against Listeria monocytogenes and Enterococcus spp. Furthermore, amoxicillin also demonstrates effectiveness against Haemophilus influenzae, select Escherichia coli strains, Actinomyces spp., Clostridium species, Salmonella spp., Shigella spp., and Corynebacteria spp. FDA-Approved Indications Amoxicillin is indicated for treating infections caused by susceptible isolates of selected bacteria, specifically beta-lactamase–negative, in the conditions listed below. Ear, nose, and throat infections: Amoxicillin is approved for the treatment of tonsillitis, pharyngitis, and otitis media in adults and pediatric patients aged 12 and older. The microbiological spectrum covers infections caused by beta-lactamase–negative Streptococcus species (alpha- and beta-hemolytic isolates only), Streptococcus pneumoniae, Staphylococcus species, or H influenzae.[1] Helicobacter pylori eradication: H pylori eradication involves triple therapy using clarithromycin, amoxicillin, and lansoprazole to reduce the risk of duodenal ulcer recurrence. 
In addition, dual treatment with amoxicillin and lansoprazole is FDA-approved for eradicating H pylori infection.[2] Lower respiratory tract infections: Amoxicillin is prescribed for treating lower respiratory tract infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic strains), Pneumococcus or Staphylococcus species, or H influenzae. In cases of community-acquired pneumonia, the Infectious Diseases Society of America (IDSA) recommends a combination therapy comprising amoxicillin and a macrolide antibiotic.[3] Acute bacterial sinusitis: The treatment for acute bacterial sinusitis involves addressing infections caused by beta-lactamase–negative Streptococcus species (limited to alpha- and beta-hemolytic isolates), S pneumoniae, Staphylococcus species, or H influenzae.[4] Skin and skin structure infections: Amoxicillin in the immediate-release formulation is prescribed to treat skin infections caused by beta-lactamase–negative Streptococcus species (restricted to alpha- and beta-hemolytic strains), Staphylococcus species, or E coli.[5] Urinary tract infection: Amoxicillin is indicated for treating genitourinary tract infections caused by beta-lactamase–negative E coli, Proteus mirabilis, or Enterococcus faecalis.[6] The Centers for Disease Control and Prevention (CDC) recommends using amoxicillin as a second-line agent for post-exposure prophylaxis for anthrax.[7] Off-label Uses Amoxicillin is often used for Lyme disease if there are contraindications for doxycycline.[8] Infectious endocarditis prophylaxis is recommended for individuals with high-risk cardiac conditions, such as a prosthetic cardiac valve or congenital heart disease, using amoxicillin.[9] Amoxicillin, combined with metronidazole, is used to treat periodontitis.[10] Amoxicillin is often used for the treatment of actinomycosis.[11] Amoxicillin belongs to the class of beta-lactam antimicrobials. Beta-lactams bind to penicillin-binding proteins, inhibiting transpeptidation — a crucial step in cell wall synthesis involving cross-linking. This action activates autolytic enzymes in the bacterial cell wall, resulting in cell wall lysis and bacterial cell destruction. This mechanism is known as bactericidal killing.[12] Amoxicillin administration can be combined with a beta-lactamase inhibitor, such as clavulanic acid or sulbactam. These inhibitors function by irreversibly binding to the catalytic site of the organism's beta-lactamase enzyme, preventing resistance to the original beta-lactam ring of amoxicillin. Although these inhibitors lack inherent bactericidal activity, their combination with amoxicillin may broaden its spectrum to include organisms producing the beta-lactamase enzyme.[13] Pharmacokinetics Absorption: Amoxicillin exhibits stability in the presence of gastric acid and is rapidly absorbed after oral administration, with average peak blood levels typically reached within 1 to 2 hours. Distribution: Amoxicillin displays significant tissue and fluid diffusion throughout the body, with the exception of the brain and spinal fluid, except in cases where meningeal inflammation is present. Amoxicillin exhibits approximately 20% plasma protein binding. Metabolism: The metabolism of amoxicillin involves oxidation, hydroxylation, and deamination processes. 
Amoxicillin is a substrate of organic anion transporters (OATs), specifically OATs 1 and 3.[14][15] Elimination: Amoxicillin has an approximate half-life of 61.3 minutes, and about 60% of the administered dose is excreted in the urine within 6 to 8 hours. Co-administration of probenecid can delay amoxicillin excretion, as the majority of the drug is eliminated unchanged in the urine. Common Adverse Drug Reactions Although generally well-tolerated, amoxicillin may lead to common gastrointestinal symptoms, including nausea, vomiting, and diarrhea. Additional adverse drug reactions associated with amoxicillin are listed below. Nephrotoxicity: Amoxicillin may cause crystalluria and interstitial nephritis.[23][24] Hypersensitivity reactions: Amoxicillin has the potential to cause hypersensitivity reactions categorized as type I, II, III, or IV. Differentiating between a type-I and type-IV reaction is crucial due to varying danger levels. A type-I hypersensitivity reaction involves an IgE-mediated response in sensitized patients, inducing widespread histamine release, resulting in an urticarial-like pruritic rash or severe anaphylaxis. In contrast, a type-IV hypersensitivity reaction is not mediated by histamine release and typically presents as a more papular or morbilliform rash, often without itching. Notably, almost all patients receiving amoxicillin inadvertently for infectious mononucleosis may develop a maculopapular rash attributed to a type IV–mediated hypersensitivity reaction. Notably, reactions of this type are not associated with anaphylaxis.[25] Hepatotoxicity: Cases of idiosyncratic liver injury have been reported in individuals receiving amoxicillin. The associated serum enzyme pattern reveals a hepatocellular pattern characterized by significant elevations in aspartate transaminase (AST) and alanine transaminase (ALT), with minimal increases in alkaline phosphatase. Most patients experience rapid recovery upon withdrawal of amoxicillin. The cause of liver injury associated with amoxicillin use is attributed to hypersensitivity. Although rare, cases of acute liver failure and vanishing bile duct syndrome have been reported. Corticosteroids are often used to treat allergic reactions caused by penicillin-related immunoallergic hepatitis, which is a rare cause of clinically apparent liver injury, with a likelihood score of B.[26] Postmarketing Adverse Drug Reactions Gastrointestinal: Gastrointestinal effects may include black hairy tongue, pseudomembranous colitis, and hemorrhagic colitis.[27] Neurological: Neurological effects may encompass reversible hyperactivity, agitation, anxiety, insomnia, confusion, convulsions, and aseptic meningitis.[28] Dermatological: Dermatological effects may manifest as serum sickness-like reactions, erythematous maculopapular rashes, exfoliative dermatitis, toxic epidermal necrolysis, and hypersensitivity vasculitis.[30] USER: What is the mechanism of action of the drug Amoxicillin and what are some of the potential side effects involved with its usage? Respond in more than 150 words. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false
len_system: 24
len_user: 29
len_context: 1,162
target: null
row_id: 56
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
We used to have a 10-acre waterlocked property by Lake Erie, in Ohio, in a remote section of the shoreline. Recently, we sold half of the property to our neighbor. Now, we can only reach the road by passing through his property or using a boat. No other way is available. However, he has been creating problems for us, because he doesn't want us to use the road. He even made a fence in the middle of it. He says that it's his property and he can do whatever he wants with it. Can we prevail? Answer in 150 words.
Such stipulation, which is neither as complete nor satisfactory as could be desired, shows that on May 13, 1867, one Mary Lane acquired by deeds lands which embrace the properties now owned by the plaintiffs and defendant, respectively, plus a strip of land 33 feet in width running from the southeast corner of the 25-acre tract, now owned by plaintiffs, east to the center of a thoroughfare called Cahoon road. By deed recorded August 10, 1881, this same Mary Lane acquired title to another strip of land 33 feet wide and extending east from the northeast corner of plaintiffs' present land to the center of Cahoon road, which was used until the year 1928 for the purpose of ingress and egress. By deed recorded September 5, 1881, Mary Lane conveyed to the New York, Chicago St. Louis Railroad Company a right of way which effected a complete separation of the lands now owned by plaintiffs and defendant. Thus, in 1881, a condition was brought about whereby the original parcel of land was divided by a railroad right of way with two strips of land 33 feet wide and extending from Cahoon road to the 25-acre tract lying south of the railroad right of way and now belonging to plaintiffs. The property involved in the instant controversy continued to be owned by Mary Lane and her heirs until February 19, 1921, when the heirs conveyed the same to two persons named Dodd and Aldrich. In the conveyance there were three separate descriptions, one description included plaintiffs' present property, another defendant's present property and the remaining one the strip of land 33 feet wide and extending from the northeast corner of plaintiffs' premises to the center of Cahoon road. Sometime during the year 1921 Dodd and Aldrich constructed a crossing seven feet wide over the tracks and right of way of the railroad and connecting the premises now owned by plaintiffs with those now owned by defendant. Such railroad crossing was used by Dodd and Aldrich from the year 1922, and upon the establishment of Forest Drive in 1925 they traveled across the land now owned by defendant along a line between the railroad crossing and the south end of Forest Drive. The nature and extent of such use are not disclosed, but it apparently continued for an undisclosed purpose until the separate and distinct tax sales in 1940. By the present action plaintiffs seek to enjoin the defendant from interfering with their use of the passage or alleged easement from their land across his land to Forest Drive. An easement has been defined as "a right without profit, created by grant or prescription, which the owner of one estate [called the dominant estate] may exercise in or over the estate of another [called the servient estate] for the benefit of the former." Yeager v. Tuning, 79 Ohio St. 121, 124, 86 N.E. 657, 658, 19 L.R.A. (N.S.), 700, 128 Am. St. Rep., 679. An easement may be acquired only by grant, express or implied, or by prescription. Where, however, the easement sought to be enforced is grounded upon implication rather than express grant, it must be clearly established that such a right exists. Implied easements are not favored because they are in derogation of the rule that written instruments speak for themselves. Ciski v. Wentworth, 122 Ohio St. 487, 172 N.E. 276. An implied easement is based upon the theory that whenever one conveys property he includes in the conveyance whatever is necessary for its beneficial use and enjoyment and retains whatever is necessary for the use and enjoyment of the land retained. 
There being in this case no express grant of an easement, it becomes necessary to determine whether one arose by implication. Easements may be implied in several ways — from an existing use at the time of the severance of ownership in land, from a conveyance describing the premises as bounded upon a way, from a conveyance with reference to a plat or map or from necessity alone, as in the case of ways of necessity. 15 Ohio Jurisprudence, 37, Section 27. Here, we are concerned only with the first and last of these methods, namely, a use existing at the time of severance or a way of necessity. It is a well settled rule that a use must be continuous, apparent, permanent and necessary to be the basis of an implied easement upon the severance of the ownership of an estate. 28 Corpus Juris Secundum, Easements, 691, Section 33; and 15 Ohio Jurisprudence, 37, 45, Sections 28, 33. For a use to be permanent in character "it is required that the use shall have been so long continued prior to severance and so obvious as to show that it was meant to be permanent; a mere temporary provision or arrangement made for the convenience of the entire estate will not constitute that degree of permanency required to burden the property with a continuance of the same when divided or separated by conveyance to different parties." 28 Corpus Juris Secundum, Easements, 691, 692, Section 33; and 15 Ohio Jurisprudence, 41, Section 31. Plaintiffs having failed, then, to present facts sufficient to warrant the finding of an implied easement from an existing use, we come to a consideration of whether the facts disclosed are such as to sustain a way of necessity. An implied easement or way of necessity is based upon the theory that without it the grantor or grantee, as the case may be, can not make use of his land. It has been stated that "necessity does not of itself create a right of way, but is said to furnish evidence of the grantor's intention to convey a right of way and, therefore, raises an implication of grant." 17 American Jurisprudence, 961, Section 48. A way of necessity will not be implied where the claimant has another means of ingress or egress, whether over his own land or over the land of another. For over 40 years thereafter there was no connection between these lands. As already noted, up to the year 1928 the strip of land 33 feet wide, still in the names of Dodd and Aldrich and connecting Cahoon road with the northeast corner of plaintiffs' property, was used as a way of travel to and from such property. In our opinion plaintiffs do have a means of access to their lands from Cahoon road over the strip of ground 33 feet wide, referred to above, now belonging to those in plaintiffs' chain of title, and this being so they are not in a position to successfully assert an easement or way of necessity over defendant's property. A way of necessity will not be implied, where there is another or other outlets available to a public thoroughfare, even though such other outlets are less convenient and would necessitate the expenditure of a considerable sum of money to render them serviceable. 15 Ohio Jurisprudence, 62, Section 44. "A way of necessity will not be decreed unless the evidence showing the need therefor is clear and convincing. Such a way is not sanctioned when there is available another means of ingress and egress to and from the claimant's land even though it may be less convenient and will involve some labor and expense to repair and maintain." 
Although it would be much more convenient and much less expensive for plaintiffs to traverse defendant's property to reach a public street, the imposition of such a burden on defendant's land on the theory of a way of necessity is legally unwarranted in the circumstances exhibited by the record. The judgment of the Court of Appeals is, therefore, reversed and final judgment rendered for defendant.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== We used to have a 10-acre waterlocked property by Lake Erie, in Ohio, in a remote section of the shoreline. Recently, we sold half of the property to our neighbor. Now, we can only reach the road by passing through his property or using a boat. No other way is available. However, he has been creating problems for us, because he doesn't want us to use the road. He even made a fence in the middle of it. He says that it's his property and he can do whatever he wants with it. Can we prevail? Answer in 150 words. {passage 0} ========== Such stipulation, which is neither as complete nor satisfactory as could be desired, shows that on May 13, 1867, one Mary Lane acquired by deeds lands which embrace the properties now owned by the plaintiffs and defendant, respectively, plus a strip of land 33 feet in width running from the southeast corner of the 25-acre tract, now owned by plaintiffs, east to the center of a thoroughfare called Cahoon road. By deed recorded August 10, 1881, this same Mary Lane acquired title to another strip of land 33 feet wide and extending east from the northeast corner of plaintiffs' present land to the center of Cahoon road, which was used until the year 1928 for the purpose of ingress and egress. By deed recorded September 5, 1881, Mary Lane conveyed to the New York, Chicago St. Louis Railroad Company a right of way which effected a complete separation of the lands now owned by plaintiffs and defendant. Thus, in 1881, a condition was brought about whereby the original parcel of land was divided by a railroad right of way with two strips of land 33 feet wide and extending from Cahoon road to the 25-acre tract lying south of the railroad right of way and now belonging to plaintiffs. The property involved in the instant controversy continued to be owned by Mary Lane and her heirs until February 19, 1921, when the heirs conveyed the same to two persons named Dodd and Aldrich. In the conveyance there were three separate descriptions, one description included plaintiffs' present property, another defendant's present property and the remaining one the strip of land 33 feet wide and extending from the northeast corner of plaintiffs' premises to the center of Cahoon road. Sometime during the year 1921 Dodd and Aldrich constructed a crossing seven feet wide over the tracks and right of way of the railroad and connecting the premises now owned by plaintiffs with those now owned by defendant. Such railroad crossing was used by Dodd and Aldrich from the year 1922, and upon the establishment of Forest Drive in 1925 they traveled across the land now owned by defendant along a line between the railroad crossing and the south end of Forest Drive. The nature and extent of such use are not disclosed, but it apparently continued for an undisclosed purpose until the separate and distinct tax sales in 1940. By the present action plaintiffs seek to enjoin the defendant from interfering with their use of the passage or alleged easement from their land across his land to Forest Drive. An easement has been defined as "a right without profit, created by grant or prescription, which the owner of one estate [called the dominant estate] may exercise in or over the estate of another [called the servient estate] for the benefit of the former." Yeager v. Tuning, 79 Ohio St. 121, 124, 86 N.E. 657, 658, 19 L.R.A. (N.S.), 700, 128 Am. St. Rep., 679. 
An easement may be acquired only by grant, express or implied, or by prescription. Where, however, the easement sought to be enforced is grounded upon implication rather than express grant, it must be clearly established that such a right exists. Implied easements are not favored because they are in derogation of the rule that written instruments speak for themselves. Ciski v. Wentworth, 122 Ohio St. 487, 172 N.E. 276. An implied easement is based upon the theory that whenever one conveys property he includes in the conveyance whatever is necessary for its beneficial use and enjoyment and retains whatever is necessary for the use and enjoyment of the land retained. There being in this case no express grant of an easement, it becomes necessary to determine whether one arose by implication. Easements may be implied in several ways — from an existing use at the time of the severance of ownership in land, from a conveyance describing the premises as bounded upon a way, from a conveyance with reference to a plat or map or from necessity alone, as in the case of ways of necessity. 15 Ohio Jurisprudence, 37, Section 27. Here, we are concerned only with the first and last of these methods, namely, a use existing at the time of severance or a way of necessity. It is a well settled rule that a use must be continuous, apparent, permanent and necessary to be the basis of an implied easement upon the severance of the ownership of an estate. 28 Corpus Juris Secundum, Easements, 691, Section 33; and 15 Ohio Jurisprudence, 37, 45, Sections 28, 33. For a use to be permanent in character "it is required that the use shall have been so long continued prior to severance and so obvious as to show that it was meant to be permanent; a mere temporary provision or arrangement made for the convenience of the entire estate will not constitute that degree of permanency required to burden the property with a continuance of the same when divided or separated by conveyance to different parties." 28 Corpus Juris Secundum, Easements, 691, 692, Section 33; and 15 Ohio Jurisprudence, 41, Section 31. Plaintiffs having failed, then, to present facts sufficient to warrant the finding of an implied easement from an existing use, we come to a consideration of whether the facts disclosed are such as to sustain a way of necessity. An implied easement or way of necessity is based upon the theory that without it the grantor or grantee, as the case may be, can not make use of his land. It has been stated that "necessity does not of itself create a right of way, but is said to furnish evidence of the grantor's intention to convey a right of way and, therefore, raises an implication of grant." 17 American Jurisprudence, 961, Section 48. A way of necessity will not be implied where the claimant has another means of ingress or egress, whether over his own land or over the land of another. For over 40 years thereafter there was no connection between these lands. As already noted, up to the year 1928 the strip of land 33 feet wide, still in the names of Dodd and Aldrich and connecting Cahoon road with the northeast corner of plaintiffs' property, was used as a way of travel to and from such property. In our opinion plaintiffs do have a means of access to their lands from Cahoon road over the strip of ground 33 feet wide, referred to above, now belonging to those in plaintiffs' chain of title, and this being so they are not in a position to successfully assert an easement or way of necessity over defendant's property. 
A way of necessity will not be implied, where there is another or other outlets available to a public thoroughfare, even though such other outlets are less convenient and would necessitate the expenditure of a considerable sum of money to render them serviceable. 15 Ohio Jurisprudence, 62, Section 44. "A way of necessity will not be decreed unless the evidence showing the need therefor is clear and convincing. Such a way is not sanctioned when there is available another means of ingress and egress to and from the claimant's land even though it may be less convenient and will involve some labor and expense to repair and maintain." Although it would be much more convenient and much less expensive for plaintiffs to traverse defendant's property to reach a public street, the imposition of such a burden on defendant's land on the theory of a way of necessity is legally unwarranted in the circumstances exhibited by the record. The judgment of the Court of Appeals is, therefore, reversed and final judgment rendered for defendant. https://casetext.com/case/trattar-v-rausch
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Such stipulation, which is neither as complete nor satisfactory as could be desired, shows that on May 13, 1867, one Mary Lane acquired by deeds lands which embrace the properties now owned by the plaintiffs and defendant, respectively, plus a strip of land 33 feet in width running from the southeast corner of the 25-acre tract, now owned by plaintiffs, east to the center of a thoroughfare called Cahoon road. By deed recorded August 10, 1881, this same Mary Lane acquired title to another strip of land 33 feet wide and extending east from the northeast corner of plaintiffs' present land to the center of Cahoon road, which was used until the year 1928 for the purpose of ingress and egress. By deed recorded September 5, 1881, Mary Lane conveyed to the New York, Chicago St. Louis Railroad Company a right of way which effected a complete separation of the lands now owned by plaintiffs and defendant. Thus, in 1881, a condition was brought about whereby the original parcel of land was divided by a railroad right of way with two strips of land 33 feet wide and extending from Cahoon road to the 25-acre tract lying south of the railroad right of way and now belonging to plaintiffs. The property involved in the instant controversy continued to be owned by Mary Lane and her heirs until February 19, 1921, when the heirs conveyed the same to two persons named Dodd and Aldrich. In the conveyance there were three separate descriptions, one description included plaintiffs' present property, another defendant's present property and the remaining one the strip of land 33 feet wide and extending from the northeast corner of plaintiffs' premises to the center of Cahoon road. Sometime during the year 1921 Dodd and Aldrich constructed a crossing seven feet wide over the tracks and right of way of the railroad and connecting the premises now owned by plaintiffs with those now owned by defendant. Such railroad crossing was used by Dodd and Aldrich from the year 1922, and upon the establishment of Forest Drive in 1925 they traveled across the land now owned by defendant along a line between the railroad crossing and the south end of Forest Drive. The nature and extent of such use are not disclosed, but it apparently continued for an undisclosed purpose until the separate and distinct tax sales in 1940. By the present action plaintiffs seek to enjoin the defendant from interfering with their use of the passage or alleged easement from their land across his land to Forest Drive. An easement has been defined as "a right without profit, created by grant or prescription, which the owner of one estate [called the dominant estate] may exercise in or over the estate of another [called the servient estate] for the benefit of the former." Yeager v. Tuning, 79 Ohio St. 121, 124, 86 N.E. 657, 658, 19 L.R.A. (N.S.), 700, 128 Am. St. Rep., 679. An easement may be acquired only by grant, express or implied, or by prescription. Where, however, the easement sought to be enforced is grounded upon implication rather than express grant, it must be clearly established that such a right exists. Implied easements are not favored because they are in derogation of the rule that written instruments speak for themselves. Ciski v. Wentworth, 122 Ohio St. 487, 172 N.E. 276. 
An implied easement is based upon the theory that whenever one conveys property he includes in the conveyance whatever is necessary for its beneficial use and enjoyment and retains whatever is necessary for the use and enjoyment of the land retained. There being in this case no express grant of an easement, it becomes necessary to determine whether one arose by implication. Easements may be implied in several ways — from an existing use at the time of the severance of ownership in land, from a conveyance describing the premises as bounded upon a way, from a conveyance with reference to a plat or map or from necessity alone, as in the case of ways of necessity. 15 Ohio Jurisprudence, 37, Section 27. Here, we are concerned only with the first and last of these methods, namely, a use existing at the time of severance or a way of necessity. It is a well settled rule that a use must be continuous, apparent, permanent and necessary to be the basis of an implied easement upon the severance of the ownership of an estate. 28 Corpus Juris Secundum, Easements, 691, Section 33; and 15 Ohio Jurisprudence, 37, 45, Sections 28, 33. For a use to be permanent in character "it is required that the use shall have been so long continued prior to severance and so obvious as to show that it was meant to be permanent; a mere temporary provision or arrangement made for the convenience of the entire estate will not constitute that degree of permanency required to burden the property with a continuance of the same when divided or separated by conveyance to different parties." 28 Corpus Juris Secundum, Easements, 691, 692, Section 33; and 15 Ohio Jurisprudence, 41, Section 31. Plaintiffs having failed, then, to present facts sufficient to warrant the finding of an implied easement from an existing use, we come to a consideration of whether the facts disclosed are such as to sustain a way of necessity. An implied easement or way of necessity is based upon the theory that without it the grantor or grantee, as the case may be, can not make use of his land. It has been stated that "necessity does not of itself create a right of way, but is said to furnish evidence of the grantor's intention to convey a right of way and, therefore, raises an implication of grant." 17 American Jurisprudence, 961, Section 48. A way of necessity will not be implied where the claimant has another means of ingress or egress, whether over his own land or over the land of another. For over 40 years thereafter there was no connection between these lands. As already noted, up to the year 1928 the strip of land 33 feet wide, still in the names of Dodd and Aldrich and connecting Cahoon road with the northeast corner of plaintiffs' property, was used as a way of travel to and from such property. In our opinion plaintiffs do have a means of access to their lands from Cahoon road over the strip of ground 33 feet wide, referred to above, now belonging to those in plaintiffs' chain of title, and this being so they are not in a position to successfully assert an easement or way of necessity over defendant's property. A way of necessity will not be implied, where there is another or other outlets available to a public thoroughfare, even though such other outlets are less convenient and would necessitate the expenditure of a considerable sum of money to render them serviceable. 15 Ohio Jurisprudence, 62, Section 44. "A way of necessity will not be decreed unless the evidence showing the need therefor is clear and convincing. 
Such a way is not sanctioned when there is available another means of ingress and egress to and from the claimant's land even though it may be less convenient and will involve some labor and expense to repair and maintain." Although it would be much more convenient and much less expensive for plaintiffs to traverse defendant's property to reach a public street, the imposition of such a burden on defendant's land on the theory of a way of necessity is legally unwarranted in the circumstances exhibited by the record. The judgment of the Court of Appeals is, therefore, reversed and final judgment rendered for defendant. USER: We used to have a 10-acre waterlocked property by Lake Erie, in Ohio, in a remote section of the shoreline. Recently, we sold half of the property to our neighbor. Now, we can only reach the road by passing through his property or using a boat. No other way is available. However, he has been creating problems for us, because he doesn't want us to use the road. He even made a fence in the middle of it. He says that it's his property and he can do whatever he wants with it. Can we prevail? Answer in 150 words. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false
len_system: 26
len_user: 99
len_context: 1,286
target: null
row_id: 204
You must answer the following questions using only the information found in the provided context block. Do not under any circumstances, use external sources or prior knowledge. Answer in complete sentences but no longer than 250 words.
What are the differences between Points 6 and 7 on the rights of sick children in health care?
6. Every child and young person has a right to information, in a form that is understandable to them. Children and young people have a right to information that they can understand about their health and healthcare. This includes information about the choice of health care services available. Special attention and some creativity are often necessary to ensure that children have the freedom to seek, receive and impart information and ideas, not only orally but also through other means of the child’s or young person’s choice, such as play and art. Ensuring that the language and format used are appropriate to the child’s or young person’s abilities and level of understanding is essential, as is ensuring that they have understood the information given and had every opportunity to participate in the conversations about their health and care. This right to information includes the right of tamariki and rangatahi to have access to information in Te Reo Māori and for those from culturally and linguistically diverse backgrounds to have access to information in their own language. It is crucial that health professionals talk directly to children and young people, as well as to their families/whānau, even if the child or young person may seem unable to comprehend. Health professionals and families/whānau should be as open as possible with children and young people about their health and healthcare. Like all patients, children and young people are entitled to know what is going to happen to them before a procedure occurs and to be given honest information about their condition and treatment outcomes, and to be helped to select and practice strategies for coping. Giving children and young people timely and accurate information means that they can retain a sense of control about their healthcare, particularly in hospital. Advance preparation for hospitalisation, healthcare procedures or impending surgery provides children and young people with a sense of mastery over the healthcare environment and helps them to cope more effectively with potentially stressful situations. 7. Every child and young person has a right to participate in decision-making and, as appropriate to their capabilities, to make decisions about their care. Children and young people have a right to be involved in decision-making about their healthcare, to the greatest extent possible in line with their capacities for understanding. The right to be involved in making decisions also includes the right to be involved in decisions about the use, return or disposal of any bodily parts or substances removed, changed or added in the course of health care. Children and young people should be offered healthcare choices wherever possible. Further, they are always entitled to a second opinion. Whenever a child or young person has questions and ideas about their healthcare, these should be heard. If their views cannot be acted on, they are entitled to an explanation. In order for children and young people to participate in decision-making, the health professionals caring for them ought to be available, trained and committed to communicating with children and young people. Effective communication is critical in healthcare, as children, young people and their families/whānau require appropriate information in order to provide informed consent to treatment. 
A child or young person needs to be able to talk with the staff caring for him or her, to understand who the staff are and what they do, and to question them about his or her condition and treatment. Participation can include both verbal and nonverbal communication by children and young people with health professionals. It should also include opportunities to communicate through play, art and other media of the child’s or young person’s choice. Health professionals need to pay attention to ensure that appropriate responses are made to the nonverbal cues and communication by children and young people who use this as their main form (for example, infants, very young children and those with disabilities). The right to participation extends beyond the right of every individual child and young person to participate in his or her care. It includes encouraging and supporting children and young people as groups to be involved in consultation on the development, implementation and evaluation of the services, policies and strategies that have an impact on them. Informed consent is to be sought from children, young people and their families/whānau before they are involved in teaching or research. Also, those who do agree to participate must have the opportunity to withdraw at any time without having to give a reason, even if they consent initially. The decision not to participate in teaching or research must not alter access to treatment. Ethical oversight by a Human Research Ethics Committee of all research projects conducted in child healthcare services is part of protecting the children and young people involved.
You must answer the following questions using only the information found in the provided context block. Do not under any circumstances, use external sources or prior knowledge. Answer in complete sentences but no longer than 250 words. You may include Te Reo Māori in your answer. What are the differences between points 6 and 7 on the rights of sick children in health care? 6. Every child and young person has a right to information, in a form that is understandable to them. Children and young people have a right to information that they can understand about their health and healthcare. This includes information about the choice of health care services available. Special attention and some creativity are often necessary to ensure that children have the freedom to seek, receive and impart information and ideas, not only orally but also through other means of the child’s or young person’s choice, such as play and art. Ensuring that the language and format used are appropriate to the child’s or young person’s abilities and level of understanding is essential, as is ensuring that they have understood the information given and had every opportunity to participate in the conversations about their health and care. This right to information includes the right of tamariki and rangatahi to have access to information in Te Reo Māori and for those from culturally and linguistically diverse backgrounds to have access to information in their own language. It is crucial that health professionals talk directly to children and young people, as well as to their families/whānau, even if the child or young person may seem unable to comprehend. Health professionals and families/whānau should be as open as possible with children and young people about their health and healthcare. Like all patients, children and young people are entitled to know what is going to happen to them before a procedure occurs and to be given honest information about their condition and treatment outcomes, and to be helped to select and practice strategies for coping. Giving children and young people timely and accurate information means that they can retain a sense of control about their healthcare, particularly in hospital. Advance preparation for hospitalisation, healthcare procedures or impending surgery provides children and young people with a sense of mastery over the healthcare environment and helps them to cope more effectively with potentially stressful situations. 7. Every child and young person has a right to participate in decision-making and, as appropriate to their capabilities, to make decisions about their care. Children and young people have a right to be involved in decision-making about their healthcare, to the greatest extent possible in line with their capacities for understanding. The right to be involved in making decisions also includes the right to be involved in decisions about the use, return or disposal of any bodily parts or substances removed, changed or added in the course of health care. Children and young people should be offered healthcare choices wherever possible. Further, they are always entitled to a second opinion. Whenever a child or young person has questions and ideas about their healthcare, these should be heard. If their views cannot be acted on, they are entitled to an explanation. In order for children and young people to participate in decision-making, the health professionals caring for them ought to be available, trained and committed to communicating with children and young people. 
Effective communication is critical in healthcare, as children, young people and their families/whānau require appropriate information in order to provide informed consent to treatment. A child or young person needs to be able to talk with the staff caring for him or her, to understand who the staff are and what they do, and to question them about his or her condition and treatment. Participation can include both verbal and nonverbal communication by children and young people with health professionals. It should also include opportunities to communicate through play, art and other media of the child’s or young person’s choice. Health professionals need to pay attention to ensure that appropriate responses are made to the nonverbal cues and communication by children and young people who use this as their main form (for example, infants, very young children and those with disabilities). The right to participation extends beyond the right of every individual child and young person to participate in his or her care. It includes encouraging and supporting children and young people as groups to be involved in consultation on the development, implementation and evaluation of the services, policies and strategies that have an impact on them. Informed consent is to be sought from children, young people and their families/whānau before they are involved in teaching or research. Also, those who do agree to participate must have the opportunity to withdraw at any time without having to give a reason, even if they consent initially. The decision not to participate in teaching or research must not alter access to treatment. Ethical oversight by a Human Research Ethics Committee of all research projects conducted in child healthcare services is part of protecting the children and young people involved.
You must answer the following questions using only the information found in the provided context block. Do not under any circumstances, use external sources or prior knowledge. Answer in complete sentences but no longer than 250 words. EVIDENCE: 6. Every child and young person has a right to information, in a form that is understandable to them. Children and young people have a right to information that they can understand about their health and healthcare. This includes information about the choice of health care services available. Special attention and some creativity are often necessary to ensure that children have the freedom to seek, receive and impart information and ideas, not only orally but also through other means of the child’s or young person’s choice, such as play and art. Ensuring that the language and format used are appropriate to the child’s or young person’s abilities and level of understanding is essential, as is ensuring that they have understood the information given and had every opportunity to participate in the conversations about their health and care. This right to information includes the right of tamariki and rangatahi to have access to information in Te Reo Māori and for those from culturally and linguistically diverse backgrounds to have access to information in their own language. It is crucial that health professionals talk directly to children and young people, as well as to their families/whānau, even if the child or young person may seem unable to comprehend. Health professionals and families/whānau should be as open as possible with children and young people about their health and healthcare. Like all patients, children and young people are entitled to know what is going to happen to them before a procedure occurs and to be given honest information about their condition and treatment outcomes, and to be helped to select and practice strategies for coping. Giving children and young people timely and accurate information means that they can retain a sense of control about their healthcare, particularly in hospital. Advance preparation for hospitalisation, healthcare procedures or impending surgery provides children and young people with a sense of mastery over the healthcare environment and helps them to cope more effectively with potentially stressful situations. 7. Every child and young person has a right to participate in decision-making and, as appropriate to their capabilities, to make decisions about their care. Children and young people have a right to be involved in decision-making about their healthcare, to the greatest extent possible in line with their capacities for understanding. The right to be involved in making decisions also includes the right to be involved in decisions about the use, return or disposal of any bodily parts or substances removed, changed or added in the course of health care. Children and young people should be offered healthcare choices wherever possible. Further, they are always entitled to a second opinion. Whenever a child or young person has questions and ideas about their healthcare, these should be heard. If their views cannot be acted on, they are entitled to an explanation. In order for children and young people to participate in decision-making, the health professionals caring for them ought to be available, trained and committed to communicating with children and young people. 
Effective communication is critical in healthcare, as children, young people and their families/whānau require appropriate information in order to provide informed consent to treatment. A child or young person needs to be able to talk with the staff caring for him or her, to understand who the staff are and what they do, and to question them about his or her condition and treatment. Participation can include both verbal and nonverbal communication by children and young people with health professionals. It should also include opportunities to communicate through play, art and other media of the child’s or young person’s choice. Health professionals need to pay attention to ensure that appropriate responses are made to the nonverbal cues and communication by children and young people who use this as their main form (for example, infants, very young children and those with disabilities). The right to participation extends beyond the right of every individual child and young person to participate in his or her care. It includes encouraging and supporting children and young people as groups to be involved in consultation on the development, implementation and evaluation of the services, policies and strategies that have an impact on them. Informed consent is to be sought from children, young people and their families/whānau before they are involved in teaching or research. Also, those who do agree to participate must have the opportunity to withdraw at any time without having to give a reason, even if they consent initially. The decision not to participate in teaching or research must not alter access to treatment. Ethical oversight by a Human Research Ethics Committee of all research projects conducted in child healthcare services is part of protecting the children and young people involved. USER: What are the differences between Points 6 and 7 on the rights of sick children in health care? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false
len_system: 37
len_user: 18
len_context: 788
target: null
row_id: 217
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
I bought my kids some new toothpaste and it contains the ingredient xylitol. I want to know more about what that is so I found this article. Please explain what xylitol is and what benefits it has. Use at least 400 words in your response.
Should I Switch to Xylitol Toothpaste? Some research suggests that xylitol toothpaste may benefit your teeth, such as preventing cavities. However, research is mixed. More studies are needed to fully support its dental health benefits. Xylitol is a sugar alcohol. Although it occurs naturally in some fruits, it’s considered an artificial sweetener. Some research suggests that xylitol may have several dental benefits. However, the American Academy of Pediatric Dentistry (AAPD) doesn’t support using xylitol toothpaste because there isn’t enough research on its effectiveness for dental health, and the current research is mixed. Keep reading to learn more about the possible dental health benefits and side effects of xylitol toothpaste, as well as how to use it. Xylitol and dental health benefits Xylitol may be an effective defense against the bacteria Streptococcus mutans (S. mutans). This type of cariogenic, or cavity-causing, bacteria is a key contributor to tooth decay and enamel breakdown. Sugar serves as food for the cariogenic bacteria that live in your mouth. When those bacteria feed on fermentable sugars, they produce lactic acid that damages tooth enamel. This damage can eventually lead to cavities. Xylitol is an unfermentable sugar alcohol that the bacteria can’t process. That means no lactic acid is produced to damage the enamel. Xylitol may also help prevent dental plaque, which may lead to cavities. Benefits of xylitol toothpaste Several studies have found that xylitol toothpaste may be an effective delivery system for xylitol. However, the research is mixed on how much xylitol is needed to experience notable benefits. For instance, a 2023 study found that using 25% xylitol toothpaste twice daily for 24 months significantly reduced levels of S. mutans in the mouth. The researchers concluded that xylitol toothpaste may be an effective home remedy for preventing cavities. A 2024 study found similar results when using 25% xylitol toothpaste twice daily for 3 months, while a 2022 review found that products containing xylitol, such as chewing gum and toothpaste, helped prevent cavities. On the other hand, the AAPD found that taking xylitol less than three times daily had no protective effects, which differs from the positive results above. However, the AAPD did note that consuming 5 to 10 grams (g) of xylitol three times daily may help reduce cavities by up to 80%. Xylitol toothpaste vs. fluoride toothpaste Research comparing xylitol toothpaste and fluoride toothpaste is limited. A small 2018 study found that fluoride toothpaste was more effective at reducing S. mutans than xylitol toothpaste. Some xylitol proponents suggest that it’s more effective when combined with fluoride in toothpaste. Xylitol helps protect the teeth from damage, and fluoride helps repair any damage that the teeth might sustain. A 2015 review of 10 studies compared fluoride toothpaste to fluoride toothpaste with 10% xylitol added. 
When children used xylitol-fluoride toothpaste for 2.5 to 3 years, their cavities were reduced by an additional 13%. That said, the evidence was deemed to be of low quality. However, a 2014 study found no significant difference in tooth decay reduction between children using xylitol-fluoride toothpaste and those using fluoride-only toothpaste. More research is needed to compare the effects of fluoride and xylitol toothpaste. Xylitol toothpaste for children Some studies have found that xylitol toothpaste may be an effective strategy for reducing cavities in kids. The AAPD has endorsed xylitol as part of a complete strategy to prevent tooth decay or cavities. However, due to mixed and limited research, the AAPD doesn’t recommend using xylitol toothpaste for children. Xylitol chewing gum and candy According to the AAPD, some research has found that chewing may enhance xylitol’s anti-cariogenic, or anti-tooth decay, effect. This means that chewing gum, lozenges, and candies may be more effective at preventing cavities than toothpaste. A 2014 study also found that erythritol candy was significantly more effective at reducing cavities than xylitol candy. However, more research is needed. How much xylitol you need The research on how much xylitol you need per day is mixed. For instance, a 2014 review suggests that a daily dose of 6 to 10 g could help prevent caries. However, the AAPD notes that three daily doses of 5 to 10 g, for a daily total of 15 to 30 g, are needed to experience dental benefits. Side effects of xylitol Xylitol is digested slowly in the large intestine. This may result in its primary side effects, which may include: flatulence diarrhea more frequent bowel movements It’s also important to note that xylitol is especially toxic to dogs. If your dog eats xylitol toothpaste — or xylitol in any form — take them to the veterinarian immediately. Make sure to bring along the packaging from the xylitol product for the vet’s reference. Frequently asked questions Is xylitol toothpaste good for your teeth? Some research suggests xylitol toothpaste could help reduce plaque buildup and bacteria that may lead to cavities. However, more research is needed. Is there xylitol in Crest toothpaste? Some types of Crest toothpaste may have xylitol, such as Crest 3D white. However, if you want xylitol in your toothpaste, it’s best to read the labels because not all toothpaste contains xylitol. The bottom line Xylitol is a sugar replacement that could help prevent cavities and tooth decay. Some research suggests that xylitol toothpaste may have a significant impact on cavity prevention. However, toothpaste may not be the most effective delivery system for xylitol. If you’re considering switching to a toothpaste with xylitol, speak with a dentist first. They could help you decide whether it’s right for you and provide suggestions to help you prevent cavities. This may include modifying your oral hygiene routine and recommending regular visits to the dentist.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== I bought my kids some new toothpaste and it contains the ingredient xylitol. I want to know more about what that is so I found this article. Please explain what xylitol is and what benefits it has. Use at least 400 words in your response. {passage 0} ========== Should I Switch to Xylitol Toothpaste? Some research suggests that xylitol toothpaste may benefit your teeth, such as preventing cavities. However, research is mixed. More studies are needed to fully support its dental health benefits. Xylitol is a sugar alcohol. Although it occurs naturally in some fruits, it’s considered an artificial sweetener. Some research suggests that xylitol may have several dental benefits. However, the American Academy of Pediatric Dentistry (AAPD) doesn’t support using xylitol toothpaste because there isn’t enough research on its effectiveness for dental health, and the current research is mixed. Keep reading to learn more about the possible dental health benefits and side effects of xylitol toothpaste, as well as how to use it. Xylitol and dental health benefits Xylitol may be an effective defense against the bacteria Streptococcus mutans (S. mutans). This type of cariogenic, or cavity-causing, bacteria is a key contributor to tooth decay and enamel breakdown. Sugar serves as food for the cariogenic bacteria that live in your mouth. When those bacteria feed on fermentable sugars, they produce lactic acid that damages tooth enamel. This damage can eventually lead to cavities. Xylitol is an unfermentable sugar alcohol that the bacteria can’t process. That means no lactic acid is produced to damage the enamel. Xylitol may also help prevent dental plaque, which may lead to cavities. Benefits of xylitol toothpaste Several studies have found that xylitol toothpaste may be an effective delivery system for xylitol. However, the research is mixed on how much xylitol is needed to experience notable benefits. For instance, a 2023 study found that using 25% xylitol toothpaste twice daily for 24 months significantly reduced levels of S. mutans in the mouth. The researchers concluded that xylitol toothpaste may be an effective home remedy for preventing cavities. A 2024 study found similar results when using 25% xylitol toothpaste twice daily for 3 months, while a 2022 review found that products containing xylitol, such as chewing gum and toothpaste, helped prevent cavities. On the other hand, the AAPD found that taking xylitol less than three times daily had no protective effects, which differs from the positive results above. However, the AAPD did note that consuming 5 to 10 grams (g) of xylitol three times daily may help reduce cavities by up to 80%. Xylitol toothpaste vs. fluoride toothpaste Research comparing xylitol toothpaste and fluoride toothpaste is limited. A small 2018 study found that fluoride toothpaste was more effective at reducing S. mutans than xylitol toothpaste.
Some xylitol proponents suggest that it’s more effective when combined with fluoride in toothpaste. Xylitol helps protect the teeth from damage, and fluoride helps repair any damage that the teeth might sustain. A 2015 review of 10 studies compared fluoride toothpaste to fluoride toothpaste with 10% xylitol added. When children used xylitol-fluoride toothpaste for 2.5 to 3 years, their cavities were reduced by an additional 13%. That said, the evidence was deemed to be of low quality. However, a 2014 study found no significant difference in tooth decay reduction between children using xylitol-fluoride toothpaste and those using fluoride-only toothpaste. More research is needed to compare the effects of fluoride and xylitol toothpaste. Xylitol toothpaste for children Some studies have found that xylitol toothpaste may be an effective strategy for reducing cavities in kids. The AAPD has endorsed xylitol as part of a complete strategy to prevent tooth decay or cavities. However, due to mixed and limited research, the AAPD doesn’t recommend using xylitol toothpaste for children. Xylitol chewing gum and candy According to the AAPD, some research has found that chewing may enhance xylitol’s anti-cariogenic, or anti-tooth decay, effect. This means that chewing gum, lozenges, and candies may be more effective at preventing cavities than toothpaste. A 2014 study also found that erythritol candy was significantly more effective at reducing cavities than xylitol candy. However, more research is needed. How much xylitol you need The research on how much xylitol you need per day is mixed. For instance, a 2014 review suggests that a daily dose of 6 to 10 g could help prevent caries. However, the AAPD notes that three daily doses of 5 to 10 g, for a daily total of 15 to 30 g, are needed to experience dental benefits. Side effects of xylitol Xylitol is digested slowly in the large intestine. This may result in its primary side effects, which may include: flatulence diarrhea more frequent bowel movements It’s also important to note that xylitol is especially toxic to dogs. If your dog eats xylitol toothpaste — or xylitol in any form — take them to the veterinarian immediately. Make sure to bring along the packaging from the xylitol product for the vet’s reference. Frequently asked questions Is xylitol toothpaste good for your teeth? Some research suggests xylitol toothpaste could help reduce plaque buildup and bacteria that may lead to cavities. However, more research is needed. Is there xylitol in Crest toothpaste? Some types of Crest toothpaste may have xylitol, such as Crest 3D white. However, if you want xylitol in your toothpaste, it’s best to read the labels because not all toothpaste contains xylitol. The bottom line Xylitol is a sugar replacement that could help prevent cavities and tooth decay. Some research suggests that xylitol toothpaste may have a significant impact on cavity prevention. However, toothpaste may not be the most effective delivery system for xylitol. If you’re considering switching to a toothpaste with xylitol, speak with a dentist first. They could help you decide whether it’s right for you and provide suggestions to help you prevent cavities. This may include modifying your oral hygiene routine and recommending regular visits to the dentist. https://www.healthline.com/health/xylitol-toothpaste#takeaway
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Should I Switch to Xylitol Toothpaste? Some research suggests that xylitol toothpaste may benefit your teeth, such as preventing cavities. However, research is mixed. More studies are needed to fully support its dental health benefits. Xylitol is a sugar alcohol. Although it occurs naturally in some fruits, it’s considered an artificial sweetener. Some research suggests that xylitol may have several dental benefits. However, the American Academy of Pediatric Dentistry (AAPD) doesn’t support using xylitol toothpaste because there isn’t enough research on its effectiveness for dental health, and the current research is mixed. Keep reading to learn more about the possible dental health benefits and side effects of xylitol toothpaste, as well as how to use it. Xylitol and dental health benefits Xylitol may be an effective defense against the bacteria Streptococcus mutans (S. mutans). This type of cariogenic, or cavity-causing, bacteria is a key contributor to tooth decay and enamel breakdown. Sugar serves as food for the cariogenic bacteria that live in your mouth. When those bacteria feed on fermentable sugars, they produce lactic acid that damages tooth enamel. This damage can eventually lead to cavities. Xylitol is an unfermentable sugar alcohol that the bacteria can’t process. That means no lactic acid is produced to damage the enamel. Xylitol may also help prevent dental plaque, which may lead to cavities. Benefits of xylitol toothpaste Several studies have found that xylitol toothpaste may be an effective delivery system for xylitol. However, the research is mixed on how much xylitol is needed to experience notable benefits. For instance, a 2023 study found that using 25% xylitol toothpaste twice daily for 24 months significantly reduced levels of S. mutans in the mouth. The researchers concluded that xylitol toothpaste may be an effective home remedy for preventing cavities. A 2024 study found similar results when using 25% xylitol toothpaste twice daily for 3 months, while a 2022 review found that products containing xylitol, such as chewing gum and toothpaste, helped prevent cavities. On the other hand, the AAPD found that taking xylitol less than three times daily had no protective effects, which differs from the positive results above. However, the AAPD did note that consuming 5 to 10 grams (g) of xylitol three times daily may help reduce cavities by up to 80%. Xylitol toothpaste vs. fluoride toothpaste Research comparing xylitol toothpaste and fluoride toothpaste is limited. A small 2018 study found that fluoride toothpaste was more effective at reducing S. mutans than xylitol toothpaste. Some xylitol proponents suggest that it’s more effective when combined with fluoride in toothpaste. Xylitol helps protect the teeth from damage, and fluoride helps repair any damage that the teeth might sustain.
A 2015 review of 10 studies compared fluoride toothpaste to fluoride toothpaste with 10% xylitol added. When children used xylitol-fluoride toothpaste for 2.5 to 3 years, their cavities were reduced by an additional 13%. That said, the evidence was deemed to be of low quality. However, a 2014 study found no significant difference in tooth decay reduction between children using xylitol-fluoride toothpaste and those using fluoride-only toothpaste. More research is needed to compare the effects of fluoride and xylitol toothpaste. Xylitol toothpaste for children Some studies have found that xylitol toothpaste may be an effective strategy for reducing cavities in kids. The AAPD has endorsed xylitol as part of a complete strategy to prevent tooth decay or cavities. However, due to mixed and limited research, the AAPD doesn’t recommend using xylitol toothpaste for children. Xylitol chewing gum and candy According to the AAPD, some research has found that chewing may enhance xylitol’s anti-cariogenic, or anti-tooth decay, effect. This means that chewing gum, lozenges, and candies may be more effective at preventing cavities than toothpaste. A 2014 study also found that erythritol candy was significantly more effective at reducing cavities than xylitol candy. However, more research is needed. How much xylitol you need The research on how much xylitol you need per day is mixed. For instance, a 2014 review suggests that a daily dose of 6 to 10 g could help prevent caries. However, the AAPD notes that three daily doses of 5 to 10 g, for a daily total of 15 to 30 g, are needed to experience dental benefits. Side effects of xylitol Xylitol is digested slowly in the large intestine. This may result in its primary side effects, which may include: flatulence diarrhea more frequent bowel movements It’s also important to note that xylitol is especially toxic to dogs. If your dog eats xylitol toothpaste — or xylitol in any form — take them to the veterinarian immediately. Make sure to bring along the packaging from the xylitol product for the vet’s reference. Frequently asked questions Is xylitol toothpaste good for your teeth? Some research suggests xylitol toothpaste could help reduce plaque buildup and bacteria that may lead to cavities. However, more research is needed. Is there xylitol in Crest toothpaste? Some types of Crest toothpaste may have xylitol, such as Crest 3D white. However, if you want xylitol in your toothpaste, it’s best to read the labels because not all toothpaste contains xylitol. The bottom line Xylitol is a sugar replacement that could help prevent cavities and tooth decay. Some research suggests that xylitol toothpaste may have a significant impact on cavity prevention. However, toothpaste may not be the most effective delivery system for xylitol. If you’re considering switching to a toothpaste with xylitol, speak with a dentist first. They could help you decide whether it’s right for you and provide suggestions to help you prevent cavities. This may include modifying your oral hygiene routine and recommending regular visits to the dentist. USER: I bought my kids some new toothpaste and it contains the ingredient xylitol. I want to know more about what that is so I found this article. Please explain what xylitol is and what benefits it has. Use at least 400 words in your response. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
45
1,007
null
48
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
Growing up with social media, I have always wondered how it has affected my education. How can social media influence students' learning experience both positively and negatively? List one reason for each.
The use of social media is incomparably on the rise among students, influenced by the globalized forms of communication and the post-pandemic rush to use multiple social media platforms for education in different fields of study. Though social media has created tremendous chances for sharing ideas and emotions, the kind of social support it provides might fail to meet students’ emotional needs, or the alleged positive effects might be short-lasting. In recent years, several studies have been conducted to explore the potential effects of social media on students’ affective traits, such as stress, anxiety, depression, and so on. The present paper reviews the findings of the exemplary published works of research to shed light on the positive and negative potential effects of the massive use of social media on students’ emotional well-being. This review can be insightful for teachers who tend to take the potential psychological effects of social media for granted. They may want to know more about the actual effects of the over-reliance on and the excessive (and actually obsessive) use of social media on students’ developing certain images of self and certain emotions which are not necessarily positive. There will be implications for pre- and in-service teacher training and professional development programs and all those involved in student affairs. Social media has turned into an essential element of individuals’ lives including students in today’s world of communication. Its use is growing significantly more than ever before especially in the post-pandemic era, marked by a great revolution happening to the educational systems. Recent investigations of using social media show that approximately 3 billion individuals worldwide are now communicating via social media (Iwamoto and Chun, 2020). This growing population of social media users is spending more and more time on social network groupings, as facts and figures show that individuals spend 2 h a day, on average, on a variety of social media applications, exchanging pictures and messages, updating status, tweeting, favoring, and commenting on many updated socially shared information (Abbott, 2017). Researchers have begun to investigate the psychological effects of using social media on students’ lives. Chukwuere and Chukwuere (2017) maintained that social media platforms can be considered the most important source of changing individuals’ mood, because when someone is passively using a social media platform seemingly with no special purpose, s/he can finally feel that his/her mood has changed as a function of the nature of content overviewed. Therefore, positive and negative moods can easily be transferred among the population using social media networks (Chukwuere and Chukwuere, 2017). This may become increasingly important as students are seen to be using social media platforms more than before and social networking is becoming an integral aspect of their lives. As described by Iwamoto and Chun (2020), when students are affected by social media posts, especially due to the increasing reliance on social media use in life, they may be encouraged to begin comparing themselves to others or develop great unrealistic expectations of themselves or others, which can have several affective consequences. 
Considering the increasing influence of social media on education, the present paper aims to focus on the affective variables such as depression, stress, and anxiety, and how social media can possibly increase or decrease these emotions in student life. The exemplary works of research on this topic in recent years will be reviewed here, hoping to shed light on the positive and negative effects of these ever-growing influential platforms on the psychology of students. The body of research on the effect of social media on students’ affective and emotional states has led to mixed results. The existing literature shows that there are some positive and some negative affective impacts. Yet, it seems that the latter is pre-dominant. Mathewson (2020) attributed these divergent positive and negative effects to the different theoretical frameworks adopted in different studies and also the different contexts (different countries with whole different educational systems). According to Fredrickson’s broaden-and-build theory of positive emotions (Fredrickson, 2001), the mental repertoires of learners can be built and broadened by how they feel. For instance, some external stimuli might provoke negative emotions such as anxiety and depression in learners. Having experienced these negative emotions, students might repeatedly check their messages on social media or get addicted to them. As a result, their cognitive repertoire and mental capacity might become limited and they might lose their concentration during their learning process. On the other hand, it should be noted that by feeling positive, learners might take full advantage of the affordances of the social media and; thus, be able to follow their learning goals strategically. This point should be highlighted that the link between the use of social media and affective states is bi-directional. Therefore, strategic use of social media or its addictive use by students can direct them toward either positive experiences like enjoyment or negative ones such as anxiety and depression. Also, these mixed positive and negative effects are similar to the findings of several other relevant studies on general populations’ psychological and emotional health. A number of studies (with general research populations not necessarily students) showed that social networks have facilitated the way of staying in touch with family and friends living far away as well as an increased social support (Zhang, 2017). Given the positive and negative emotional effects of social media, social media can either scaffold the emotional repertoire of students, which can develop positive emotions in learners, or induce negative provokers in them, based on which learners might feel negative emotions such as anxiety and depression. However, admittedly, social media has also generated a domain that encourages the act of comparing lives, and striving for approval; therefore, it establishes and internalizes unrealistic perceptions (Virden et al., 2014; Radovic et al., 2017). It should be mentioned that the susceptibility of affective variables to social media should be interpreted from a dynamic lens. This means that the ecology of the social media can make changes in the emotional experiences of learners. More specifically, students’ affective variables might self-organize into different states under the influence of social media. 
As for the positive correlation found in many studies between the use of social media and such negative effects as anxiety, depression, and stress, it can be hypothesized that this correlation is induced by the continuous comparison the individual makes and the perception that others are doing better than him/her influenced by the posts that appear on social media. Using social media can play a major role in university students’ psychological well-being than expected. Though most of these studies were correlational, and correlation is not the same as causation, as the studies show that the number of participants experiencing these negative emotions under the influence of social media is significantly high, more extensive research is highly suggested to explore causal effects (Mathewson, 2020). As the review of exemplary studies showed, some believed that social media increased comparisons that students made between themselves and others. This finding ratifies the relevance of the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007) and Festinger’s (1954) Social Comparison Theory. Concerning the negative effects of social media on students’ psychology, it can be argued that individuals may fail to understand that the content presented in social media is usually changed to only represent the attractive aspects of people’s lives, showing an unrealistic image of things. We can add that this argument also supports the relevance of the Social Comparison Theory and the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007), because social media sets standards that students think they should compare themselves with. A constant observation of how other students or peers are showing their instances of achievement leads to higher self-evaluation (Stapel and Koomen, 2000).
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> Growing up with social media, I have always wondered how it has affected my education. How can social media influence students' learning experience both positively and negatively? List one reason for each. <TEXT> The use of social media is incomparably on the rise among students, influenced by the globalized forms of communication and the post-pandemic rush to use multiple social media platforms for education in different fields of study. Though social media has created tremendous chances for sharing ideas and emotions, the kind of social support it provides might fail to meet students’ emotional needs, or the alleged positive effects might be short-lasting. In recent years, several studies have been conducted to explore the potential effects of social media on students’ affective traits, such as stress, anxiety, depression, and so on. The present paper reviews the findings of the exemplary published works of research to shed light on the positive and negative potential effects of the massive use of social media on students’ emotional well-being. This review can be insightful for teachers who tend to take the potential psychological effects of social media for granted. They may want to know more about the actual effects of the over-reliance on and the excessive (and actually obsessive) use of social media on students’ developing certain images of self and certain emotions which are not necessarily positive. There will be implications for pre- and in-service teacher training and professional development programs and all those involved in student affairs. Social media has turned into an essential element of individuals’ lives including students in today’s world of communication. Its use is growing significantly more than ever before especially in the post-pandemic era, marked by a great revolution happening to the educational systems. Recent investigations of using social media show that approximately 3 billion individuals worldwide are now communicating via social media (Iwamoto and Chun, 2020). This growing population of social media users is spending more and more time on social network groupings, as facts and figures show that individuals spend 2 h a day, on average, on a variety of social media applications, exchanging pictures and messages, updating status, tweeting, favoring, and commenting on many updated socially shared information (Abbott, 2017). Researchers have begun to investigate the psychological effects of using social media on students’ lives. Chukwuere and Chukwuere (2017) maintained that social media platforms can be considered the most important source of changing individuals’ mood, because when someone is passively using a social media platform seemingly with no special purpose, s/he can finally feel that his/her mood has changed as a function of the nature of content overviewed. Therefore, positive and negative moods can easily be transferred among the population using social media networks (Chukwuere and Chukwuere, 2017). This may become increasingly important as students are seen to be using social media platforms more than before and social networking is becoming an integral aspect of their lives. 
As described by Iwamoto and Chun (2020), when students are affected by social media posts, especially due to the increasing reliance on social media use in life, they may be encouraged to begin comparing themselves to others or develop great unrealistic expectations of themselves or others, which can have several affective consequences. Considering the increasing influence of social media on education, the present paper aims to focus on the affective variables such as depression, stress, and anxiety, and how social media can possibly increase or decrease these emotions in student life. The exemplary works of research on this topic in recent years will be reviewed here, hoping to shed light on the positive and negative effects of these ever-growing influential platforms on the psychology of students. The body of research on the effect of social media on students’ affective and emotional states has led to mixed results. The existing literature shows that there are some positive and some negative affective impacts. Yet, it seems that the latter is pre-dominant. Mathewson (2020) attributed these divergent positive and negative effects to the different theoretical frameworks adopted in different studies and also the different contexts (different countries with whole different educational systems). According to Fredrickson’s broaden-and-build theory of positive emotions (Fredrickson, 2001), the mental repertoires of learners can be built and broadened by how they feel. For instance, some external stimuli might provoke negative emotions such as anxiety and depression in learners. Having experienced these negative emotions, students might repeatedly check their messages on social media or get addicted to them. As a result, their cognitive repertoire and mental capacity might become limited and they might lose their concentration during their learning process. On the other hand, it should be noted that by feeling positive, learners might take full advantage of the affordances of the social media and; thus, be able to follow their learning goals strategically. This point should be highlighted that the link between the use of social media and affective states is bi-directional. Therefore, strategic use of social media or its addictive use by students can direct them toward either positive experiences like enjoyment or negative ones such as anxiety and depression. Also, these mixed positive and negative effects are similar to the findings of several other relevant studies on general populations’ psychological and emotional health. A number of studies (with general research populations not necessarily students) showed that social networks have facilitated the way of staying in touch with family and friends living far away as well as an increased social support (Zhang, 2017). Given the positive and negative emotional effects of social media, social media can either scaffold the emotional repertoire of students, which can develop positive emotions in learners, or induce negative provokers in them, based on which learners might feel negative emotions such as anxiety and depression. However, admittedly, social media has also generated a domain that encourages the act of comparing lives, and striving for approval; therefore, it establishes and internalizes unrealistic perceptions (Virden et al., 2014; Radovic et al., 2017). It should be mentioned that the susceptibility of affective variables to social media should be interpreted from a dynamic lens. 
This means that the ecology of the social media can make changes in the emotional experiences of learners. More specifically, students’ affective variables might self-organize into different states under the influence of social media. As for the positive correlation found in many studies between the use of social media and such negative effects as anxiety, depression, and stress, it can be hypothesized that this correlation is induced by the continuous comparison the individual makes and the perception that others are doing better than him/her influenced by the posts that appear on social media. Using social media can play a major role in university students’ psychological well-being than expected. Though most of these studies were correlational, and correlation is not the same as causation, as the studies show that the number of participants experiencing these negative emotions under the influence of social media is significantly high, more extensive research is highly suggested to explore causal effects (Mathewson, 2020). As the review of exemplary studies showed, some believed that social media increased comparisons that students made between themselves and others. This finding ratifies the relevance of the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007) and Festinger’s (1954) Social Comparison Theory. Concerning the negative effects of social media on students’ psychology, it can be argued that individuals may fail to understand that the content presented in social media is usually changed to only represent the attractive aspects of people’s lives, showing an unrealistic image of things. We can add that this argument also supports the relevance of the Social Comparison Theory and the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007), because social media sets standards that students think they should compare themselves with. A constant observation of how other students or peers are showing their instances of achievement leads to higher self-evaluation (Stapel and Koomen, 2000). https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.1010766/full
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: The use of social media is incomparably on the rise among students, influenced by the globalized forms of communication and the post-pandemic rush to use multiple social media platforms for education in different fields of study. Though social media has created tremendous chances for sharing ideas and emotions, the kind of social support it provides might fail to meet students’ emotional needs, or the alleged positive effects might be short-lasting. In recent years, several studies have been conducted to explore the potential effects of social media on students’ affective traits, such as stress, anxiety, depression, and so on. The present paper reviews the findings of the exemplary published works of research to shed light on the positive and negative potential effects of the massive use of social media on students’ emotional well-being. This review can be insightful for teachers who tend to take the potential psychological effects of social media for granted. They may want to know more about the actual effects of the over-reliance on and the excessive (and actually obsessive) use of social media on students’ developing certain images of self and certain emotions which are not necessarily positive. There will be implications for pre- and in-service teacher training and professional development programs and all those involved in student affairs. Social media has turned into an essential element of individuals’ lives including students in today’s world of communication. Its use is growing significantly more than ever before especially in the post-pandemic era, marked by a great revolution happening to the educational systems. Recent investigations of using social media show that approximately 3 billion individuals worldwide are now communicating via social media (Iwamoto and Chun, 2020). This growing population of social media users is spending more and more time on social network groupings, as facts and figures show that individuals spend 2 h a day, on average, on a variety of social media applications, exchanging pictures and messages, updating status, tweeting, favoring, and commenting on many updated socially shared information (Abbott, 2017). Researchers have begun to investigate the psychological effects of using social media on students’ lives. Chukwuere and Chukwuere (2017) maintained that social media platforms can be considered the most important source of changing individuals’ mood, because when someone is passively using a social media platform seemingly with no special purpose, s/he can finally feel that his/her mood has changed as a function of the nature of content overviewed. Therefore, positive and negative moods can easily be transferred among the population using social media networks (Chukwuere and Chukwuere, 2017). This may become increasingly important as students are seen to be using social media platforms more than before and social networking is becoming an integral aspect of their lives. As described by Iwamoto and Chun (2020), when students are affected by social media posts, especially due to the increasing reliance on social media use in life, they may be encouraged to begin comparing themselves to others or develop great unrealistic expectations of themselves or others, which can have several affective consequences. 
Considering the increasing influence of social media on education, the present paper aims to focus on the affective variables such as depression, stress, and anxiety, and how social media can possibly increase or decrease these emotions in student life. The exemplary works of research on this topic in recent years will be reviewed here, hoping to shed light on the positive and negative effects of these ever-growing influential platforms on the psychology of students. The body of research on the effect of social media on students’ affective and emotional states has led to mixed results. The existing literature shows that there are some positive and some negative affective impacts. Yet, it seems that the latter is pre-dominant. Mathewson (2020) attributed these divergent positive and negative effects to the different theoretical frameworks adopted in different studies and also the different contexts (different countries with whole different educational systems). According to Fredrickson’s broaden-and-build theory of positive emotions (Fredrickson, 2001), the mental repertoires of learners can be built and broadened by how they feel. For instance, some external stimuli might provoke negative emotions such as anxiety and depression in learners. Having experienced these negative emotions, students might repeatedly check their messages on social media or get addicted to them. As a result, their cognitive repertoire and mental capacity might become limited and they might lose their concentration during their learning process. On the other hand, it should be noted that by feeling positive, learners might take full advantage of the affordances of the social media and; thus, be able to follow their learning goals strategically. This point should be highlighted that the link between the use of social media and affective states is bi-directional. Therefore, strategic use of social media or its addictive use by students can direct them toward either positive experiences like enjoyment or negative ones such as anxiety and depression. Also, these mixed positive and negative effects are similar to the findings of several other relevant studies on general populations’ psychological and emotional health. A number of studies (with general research populations not necessarily students) showed that social networks have facilitated the way of staying in touch with family and friends living far away as well as an increased social support (Zhang, 2017). Given the positive and negative emotional effects of social media, social media can either scaffold the emotional repertoire of students, which can develop positive emotions in learners, or induce negative provokers in them, based on which learners might feel negative emotions such as anxiety and depression. However, admittedly, social media has also generated a domain that encourages the act of comparing lives, and striving for approval; therefore, it establishes and internalizes unrealistic perceptions (Virden et al., 2014; Radovic et al., 2017). It should be mentioned that the susceptibility of affective variables to social media should be interpreted from a dynamic lens. This means that the ecology of the social media can make changes in the emotional experiences of learners. More specifically, students’ affective variables might self-organize into different states under the influence of social media. 
As for the positive correlation found in many studies between the use of social media and such negative effects as anxiety, depression, and stress, it can be hypothesized that this correlation is induced by the continuous comparison the individual makes and the perception that others are doing better than him/her influenced by the posts that appear on social media. Using social media can play a major role in university students’ psychological well-being than expected. Though most of these studies were correlational, and correlation is not the same as causation, as the studies show that the number of participants experiencing these negative emotions under the influence of social media is significantly high, more extensive research is highly suggested to explore causal effects (Mathewson, 2020). As the review of exemplary studies showed, some believed that social media increased comparisons that students made between themselves and others. This finding ratifies the relevance of the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007) and Festinger’s (1954) Social Comparison Theory. Concerning the negative effects of social media on students’ psychology, it can be argued that individuals may fail to understand that the content presented in social media is usually changed to only represent the attractive aspects of people’s lives, showing an unrealistic image of things. We can add that this argument also supports the relevance of the Social Comparison Theory and the Interpretation Comparison Model (Stapel and Koomen, 2000; Stapel, 2007), because social media sets standards that students think they should compare themselves with. A constant observation of how other students or peers are showing their instances of achievement leads to higher self-evaluation (Stapel and Koomen, 2000). USER: Growing up with social media, I have always wondered how it has affected my education. How can social media influence students' learning experience both positively and negatively? List one reason for each. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
32
1,283
null
746
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
what are the health benefits of laughter and what are the symptoms of having dry eyes and what are some of the ingredients in the eye drops
Objective: To assess efficacy and safety of laughter exercise in patients with symptomatic dry eye disease. Design: Non-inferiority randomised controlled trial. Setting: Recruitment was from clinics and community and the trial took place at Zhongshan Ophthalmic Center, Sun Yat-sen University, the largest ophthalmic centre in China, between 18 June 2020 to 8 January 2021. Participants: People with symptomatic dry eye disease aged 18-45 years with ocular surface disease index scores ranging from 18 to 80 and tear film break-up time of eight seconds or less. Interventions: Participants were randomised 1:1 to receive laughter exercise or artificial tears (0.1% sodium hyaluronic acid eyedrop, control group) four times daily for eight weeks. The laughter exercise group viewed an instructional video and participants were requested to vocalise the phrases "Hee hee hee, hah hah hah, cheese cheese cheese, cheek cheek cheek, hah hah hah hah hah hah" 30 times per five minute session. Investigators assessing study outcomes were masked to group assignment but participants were unmasked for practical reasons. Main outcome measures: The primary outcome was the mean change in the ocular surface disease index (0-100, higher scores indicating worse ocular surface discomfort) from baseline to eight weeks in the per protocol population. The non-inferiority margin was 6 points of this index score. Main secondary outcomes included the proportion of patients with a decrease from baseline in ocular surface disease index score of at least 10 points and changes in dry eye disease signs, for example, non-invasive tear break up time at eight weeks. Results: 299 participants (mean age 28.9 years; 74% female) were randomly assigned to receive laughter exercise (n=149) or 0.1% sodium hyaluronic acid (n=150). 283 (95%) completed the trial. The mean change in ocular surface disease index score at eight weeks was -10.5 points (95% confidence interval (CI) -13.1 to -7.82) in the laughter exercise group and -8.83 (-11.7 to -6.02) in the control group. The upper boundary of the CI for difference in change between groups was lower than the non-inferiority margin (mean difference -1.45 points (95% CI -5.08 to 2.19); P=0.43), supporting non-inferiority. Among secondary outcomes, the laughter exercise was better in improving non-invasive tear break up time (mean difference 2.30 seconds (95% CI 1.30 to 3.30), P<0.001); other secondary outcomes showed no significant difference. No adverse events were noted in either study group. Conclusions: The laughter exercise was non-inferior to 0.1% sodium hyaluronic acid in relieving subjective symptoms in patients with dry eye disease with limited corneal staining over eight weeks intervention. Trial registration: ClinicalTrials.gov NCT04421300. © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. Conflict of interest statement Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the National Natural Science Foundation of China (82070922, 82201142) and the High-level Hospital Construction Project (303020101) for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== what are the health benefits of laughter and what are the symptoms of having dry eyes and what are some of the ingreients in the eye drops {passage 0} ========== t Objective: To assess efficacy and safety of laughter exercise in patients with symptomatic dry eye disease. Design: Non-inferiority randomised controlled trial. Setting: Recruitment was from clinics and community and the trial took place at Zhongshan Ophthalmic Center, Sun Yat-sen University, the largest ophthalmic centre in China, between 18 June 2020 to 8 January 2021. Participants: People with symptomatic dry eye disease aged 18-45 years with ocular surface disease index scores ranging from 18 to 80 and tear film break-up time of eight seconds or less. Interventions: Participants were randomised 1:1 to receive laughter exercise or artificial tears (0.1% sodium hyaluronic acid eyedrop, control group) four times daily for eight weeks. The laughter exercise group viewed an instructional video and participants were requested to vocalise the phrases "Hee hee hee, hah hah hah, cheese cheese cheese, cheek cheek cheek, hah hah hah hah hah hah" 30 times per five minute session. Investigators assessing study outcomes were masked to group assignment but participants were unmasked for practical reasons. Main outcome measures: The primary outcome was the mean change in the ocular surface disease index (0-100, higher scores indicating worse ocular surface discomfort) from baseline to eight weeks in the per protocol population. The non-inferiority margin was 6 points of this index score. Main secondary outcomes included the proportion of patients with a decrease from baseline in ocular surface disease index score of at least 10 points and changes in dry eye disease signs, for example, non-invasive tear break up time at eight weeks. Results: 299 participants (mean age 28.9 years; 74% female) were randomly assigned to receive laughter exercise (n=149) or 0.1% sodium hyaluronic acid (n=150). 283 (95%) completed the trial. The mean change in ocular surface disease index score at eight weeks was -10.5 points (95% confidence interval (CI) -13.1 to -7.82) in the laughter exercise group and -8.83 (-11.7 to -6.02) in the control group. The upper boundary of the CI for difference in change between groups was lower than the non-inferiority margin (mean difference -1.45 points (95% CI -5.08 to 2.19); P=0.43), supporting non-inferiority. Among secondary outcomes, the laughter exercise was better in improving non-invasive tear break up time (mean difference 2.30 seconds (95% CI 1.30 to 3.30), P<0.001); other secondary outcomes showed no significant difference. No adverse events were noted in either study group. Conclusions: The laughter exercise was non-inferior to 0.1% sodium hyaluronic acid in relieving subjective symptoms in patients with dry eye disease with limited corneal staining over eight weeks intervention. Trial registration: ClinicalTrials.gov NCT04421300. © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. 
PubMed Disclaimer Conflict of interest statement Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the National Natural Science Foundation of China (82070922, 82201142) and the High-level Hospital Construction Project (303020101) for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. https://pubmed.ncbi.nlm.nih.gov/39260878/
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: t Objective: To assess efficacy and safety of laughter exercise in patients with symptomatic dry eye disease. Design: Non-inferiority randomised controlled trial. Setting: Recruitment was from clinics and community and the trial took place at Zhongshan Ophthalmic Center, Sun Yat-sen University, the largest ophthalmic centre in China, between 18 June 2020 to 8 January 2021. Participants: People with symptomatic dry eye disease aged 18-45 years with ocular surface disease index scores ranging from 18 to 80 and tear film break-up time of eight seconds or less. Interventions: Participants were randomised 1:1 to receive laughter exercise or artificial tears (0.1% sodium hyaluronic acid eyedrop, control group) four times daily for eight weeks. The laughter exercise group viewed an instructional video and participants were requested to vocalise the phrases "Hee hee hee, hah hah hah, cheese cheese cheese, cheek cheek cheek, hah hah hah hah hah hah" 30 times per five minute session. Investigators assessing study outcomes were masked to group assignment but participants were unmasked for practical reasons. Main outcome measures: The primary outcome was the mean change in the ocular surface disease index (0-100, higher scores indicating worse ocular surface discomfort) from baseline to eight weeks in the per protocol population. The non-inferiority margin was 6 points of this index score. Main secondary outcomes included the proportion of patients with a decrease from baseline in ocular surface disease index score of at least 10 points and changes in dry eye disease signs, for example, non-invasive tear break up time at eight weeks. Results: 299 participants (mean age 28.9 years; 74% female) were randomly assigned to receive laughter exercise (n=149) or 0.1% sodium hyaluronic acid (n=150). 283 (95%) completed the trial. The mean change in ocular surface disease index score at eight weeks was -10.5 points (95% confidence interval (CI) -13.1 to -7.82) in the laughter exercise group and -8.83 (-11.7 to -6.02) in the control group. The upper boundary of the CI for difference in change between groups was lower than the non-inferiority margin (mean difference -1.45 points (95% CI -5.08 to 2.19); P=0.43), supporting non-inferiority. Among secondary outcomes, the laughter exercise was better in improving non-invasive tear break up time (mean difference 2.30 seconds (95% CI 1.30 to 3.30), P<0.001); other secondary outcomes showed no significant difference. No adverse events were noted in either study group. Conclusions: The laughter exercise was non-inferior to 0.1% sodium hyaluronic acid in relieving subjective symptoms in patients with dry eye disease with limited corneal staining over eight weeks intervention. Trial registration: ClinicalTrials.gov NCT04421300. © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. 
PubMed Disclaimer Conflict of interest statement Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the National Natural Science Foundation of China (82070922, 82201142) and the High-level Hospital Construction Project (303020101) for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. USER: what are the health benefits of laughter and what are the symptoms of having dry eyes and what are some of the ingreients in the eye drops Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
27
518
null
839
Answer the question in the prompt based only on the text provided in the prompt itself.
The text below includes the reactions of a number of organizations to proposed changes to the availability of legal aid in the United Kingdom. According to the text, what is the financial case against these changes?
The most vulnerable will not get the help they need People need advice across different areas of legal scope to solve their problems sustainably Respondents also observed that the client groups who seek help with these areas are amongst the most vulnerable, usually they have multiple problems or experience ‘clusters’ of interrelated problems so need a seamless (or ’holistic’) service. The blanket removal of many areas of civil law from legal aid funding will prevent many people with mental health problems from accessing legal support for issues that cannot be neatly delineated into different types and are often central to managing their mental health. Mind/Rethink joint response Members of migrant communities, may be more likely to present particularly complex cases involving a range of different factors, or to find that language and cultural barriers mean that it is more difficult for them to resolve cases without legal aid support. Migrants Rights Network Priority debts, which could lead to the loss of a home, must be balanced with other key debts, such as council tax arrears and energy bill arrears, which also have a profound effect on consumers’ lives. Addressing a housing related debt in isolation is impracticable and unlikely to lead to a sustainable financial solution. Consumer Focus Wales As a matter of principle, we think that the Government’s list of criteria justifying the retention of legal aid should be supplemented by acknowledging cases where the disparity of resources between the parties is such as to unduly restrict the effective participation of one party in the proceedings for redress. Justice While the attempt to offer legal aid to victims of domestic violence is welcome, it does not cover other vulnerabilities in these cases such as substance misuse, disabilities, and mental health problems. Coordinated Action Against Domestic Abuse To exclude areas of law such as housing and debt from the legal aid scheme denies victims of violence the support they need to live a life free from violence. National Federation of Women’s Institutes Agencies also questioned whether the new proposed definitions and criteria for the scope of civil funding could work actually in practice. The gateways for demonstrating domestic violence are very limited and confined to ongoing proceedings…they do not reflect the pathways victims of domestic violence access to find help and support. Gingerbread It is nonsensical to create a system where the victim would be entitled to legal aid for related family law proceedings if the perpetrator of domestic violence had been convicted of assault occasioning actual bodily harm, but would be denied legal aid and have to represent herself if the perpetrator had been cautioned for the same offence. Rights of Women All the areas which will remain in scope are clearly important, but we believe the definitions and tests proposed would involve greater bureaucracy and problems of legal challenge and interpretation (and) result in many vulnerable people being unable to get the help they need at an early stage if definitions are set too tightly… for example, advising only on debts where a home is at ‘immediate risk’ is not practical, as most clients have multiple debts which must be addressed for them to achieve a sustainable financial position. 
Citizens Advice No legal aid will mean no help or advice for many There was a widespread view from respondents that for the ‘out of scope’ categories, the Green Paper contained misleading assertions about alternative sources of advice, and the capacity within the pro-bono and voluntary sectors to provide appropriate help. For example, National Debtline do not provide face-to-face debt advice and refer cases requiring specialist legal advice elsewhere.5 The ability of clients to use paid for, or conditional fee (CFA) and insurance funded services as an alternative to public funding was also questioned. The implication that charities like Disability Alliance are available to help people in the advent of legal aid cuts misrepresents the reality that we do not provide such support. Disability Alliance The Green Paper mentions IPSEA, the Advisory Centre for Education (ACE) and Parent Partnership Services (PPS) as alternative sources of support to legal aid in education cases…they do not have the capacity, and in some cases do not have the remit to deliver the level of support parents need in SEN education cases. Ambitious about Autism Reducing legal aid in the area of employment law will increase the demand on our free helpline but in the current economic climate it is unlikely that we will be able to meet the additional demand. Working Families One of the major barriers to the greater use of CFAs is disbursement funding and the costs of investigation. These costs are substantial in clinical negligence claims. Action Against Medical Accidents Legal aid saves the public purse money Many responses pointed to the value of legal aid, both in terms of its social value and its outcomes for clients, but also in terms of the cost savings to the justice system and to other statutory services, and the Government’s broader agenda to improve family and relationship support. Legal aid in administrative justice represents exceptional value for money. For example, welfare benefits legal aid cost £28.3 million in 2009/10, representing less than 0.18 per cent of the £16 billion value of unclaimed benefits. The success rate of legally-aided clients in this and several other administrative jurisdictions is over 90 per cent. Administrative Justice and Tribunals Council There is a strong case for targeting legal aid investment where it can have the greatest impact – this involves taking a broader view than simply looking at issues of loss of liberty or imminent homelessness, but should involve reconfiguring services to be more client-centred and targeting services better at those client groups for whom getting advice has the greatest beneficial impact. Youth Access In many of our cases at the CLC, the provision of legal advice and assistance can help resolve problems quickly and prevent matters from escalating. Removing access to legal advice in many civil and family law matters removes the possibility for problems to be resolved early and efficiently without the need for litigation. Childrens’ Legal Centre Proposals to withdraw legal aid when combined with the evidence of the lack of awareness of alternative support services will undermine Government’s broader agenda for relationship support...and increases the risk of the divorcing and separating population’s personal indebtedness...Couples spend an average of £28,000 when a marriage ends. National Family Mediation If the legal aid cuts go through and people are denied a lawyer for custody cases, I will lose all chance of ever seeing my children again. 
Client from Crossroads Women’s Centre Respondents found evidence to back this up such as the Legal Services Research Centre’s work on exclusion and legal problems.6 The General Council of the Bar for example had commissioned a ‘cohort’ analysis which compared the outcomes for advice seekers recorded by the LSRC’s civil and social justice survey as between those who sought help from informal sources, non-legal service providers, and legal aid providers. This identified a statistically significant better level of outcomes from legal aid services.7 The Law Society, LAPG, ILPA, EHRC, LASA, ASA and many others referred to the ‘business case’ research by Citizens Advice which used LSC outcomes and data from the LSRC’s civil and social justice survey to estimate (on 2008-9 figures) the cost-benefit ratio for key civil categories of legal aid advice. This research looked at the ‘adverse consequences’ of civil problems and found that: • for every £1 of legal aid expenditure on housing advice, the state potentially saves £2.34 • for every £1 of legal aid expenditure on debt advice, the state potentially saves £2.98 • for every £1 of legal aid expenditure on benefits advice, the state potentially saves £8.80 • for every £1 of legal aid expenditure on employment advice, the state potentially saves £7.13.8 Some respondents referred to work undertaken by the New Economics Foundation for the Law Centres Federation also suggested that the ‘social return’ for legal help for clients with the most complex problems could be as high £10 to every £1 invested.9
Answer the question in the prompt based only on the text provided in the prompt itself. EVIDENCE: The most vulnerable will not get the help they need People need advice across different areas of legal scope to solve their problems sustainably Respondents also observed that the client groups who seek help with these areas are amongst the most vulnerable, usually they have multiple problems or experience ‘clusters’ of interrelated problems so need a seamless (or ’holistic’) service. The blanket removal of many areas of civil law from legal aid funding will prevent many people with mental health problems from accessing legal support for issues that cannot be neatly delineated into different types and are often central to managing their mental health. Mind/Rethink joint response Members of migrant communities, may be more likely to present particularly complex cases involving a range of different factors, or to find that language and cultural barriers mean that it is more difficult for them to resolve cases without legal aid support. Migrants Rights Network Priority debts, which could lead to the loss of a home, must be balanced with other key debts, such as council tax arrears and energy bill arrears, which also have a profound effect on consumers’ lives. Addressing a housing related debt in isolation is impracticable and unlikely to lead to a sustainable financial solution. Consumer Focus Wales As a matter of principle, we think that the Government’s list of criteria justifying the retention of legal aid should be supplemented by acknowledging cases where the disparity of resources between the parties is such as to unduly restrict the effective participation of one party in the proceedings for redress. Justice While the attempt to offer legal aid to victims of domestic violence is welcome, it does not cover other vulnerabilities in these cases such as substance misuse, disabilities, and mental health problems. Coordinated Action Against Domestic Abuse To exclude areas of law such as housing and debt from the legal aid scheme denies victims of violence the support they need to live a life free from violence. National Federation of Women’s Institutes Agencies also questioned whether the new proposed definitions and criteria for the scope of civil funding could work actually in practice. The gateways for demonstrating domestic violence are very limited and confined to ongoing proceedings…they do not reflect the pathways victims of domestic violence access to find help and support. Gingerbread It is nonsensical to create a system where the victim would be entitled to legal aid for related family law proceedings if the perpetrator of domestic violence had been convicted of assault occasioning actual bodily harm, but would be denied legal aid and have to represent herself if the perpetrator had been cautioned for the same offence. Rights of Women All the areas which will remain in scope are clearly important, but we believe the definitions and tests proposed would involve greater bureaucracy and problems of legal challenge and interpretation (and) result in many vulnerable people being unable to get the help they need at an early stage if definitions are set too tightly… for example, advising only on debts where a home is at ‘immediate risk’ is not practical, as most clients have multiple debts which must be addressed for them to achieve a sustainable financial position. 
Citizens Advice No legal aid will mean no help or advice for many There was a widespread view from respondents that for the ‘out of scope’ categories, the Green Paper contained misleading assertions about alternative sources of advice, and the capacity within the pro-bono and voluntary sectors to provide appropriate help. For example, National Debtline do not provide face-to-face debt advice and refer cases requiring specialist legal advice elsewhere.5 The ability of clients to use paid for, or conditional fee (CFA) and insurance funded services as an alternative to public funding was also questioned. The implication that charities like Disability Alliance are available to help people in the advent of legal aid cuts misrepresents the reality that we do not provide such support. Disability Alliance The Green Paper mentions IPSEA, the Advisory Centre for Education (ACE) and Parent Partnership Services (PPS) as alternative sources of support to legal aid in education cases…they do not have the capacity, and in some cases do not have the remit to deliver the level of support parents need in SEN education cases. Ambitious about Autism Reducing legal aid in the area of employment law will increase the demand on our free helpline but in the current economic climate it is unlikely that we will be able to meet the additional demand. Working Families One of the major barriers to the greater use of CFAs is disbursement funding and the costs of investigation. These costs are substantial in clinical negligence claims. Action Against Medical Accidents Legal aid saves the public purse money Many responses pointed to the value of legal aid, both in terms of its social value and its outcomes for clients, but also in terms of the cost savings to the justice system and to other statutory services, and the Government’s broader agenda to improve family and relationship support. Legal aid in administrative justice represents exceptional value for money. For example, welfare benefits legal aid cost £28.3 million in 2009/10, representing less than 0.18 per cent of the £16 billion value of unclaimed benefits. The success rate of legally-aided clients in this and several other administrative jurisdictions is over 90 per cent. Administrative Justice and Tribunals Council There is a strong case for targeting legal aid investment where it can have the greatest impact – this involves taking a broader view than simply looking at issues of loss of liberty or imminent homelessness, but should involve reconfiguring services to be more client-centred and targeting services better at those client groups for whom getting advice has the greatest beneficial impact. Youth Access In many of our cases at the CLC, the provision of legal advice and assistance can help resolve problems quickly and prevent matters from escalating. Removing access to legal advice in many civil and family law matters removes the possibility for problems to be resolved early and efficiently without the need for litigation. Childrens’ Legal Centre Proposals to withdraw legal aid when combined with the evidence of the lack of awareness of alternative support services will undermine Government’s broader agenda for relationship support...and increases the risk of the divorcing and separating population’s personal indebtedness...Couples spend an average of £28,000 when a marriage ends. National Family Mediation If the legal aid cuts go through and people are denied a lawyer for custody cases, I will lose all chance of ever seeing my children again. 
Client from Crossroads Women’s Centre Respondents found evidence to back this up such as the Legal Services Research Centre’s work on exclusion and legal problems.6 The General Council of the Bar for example had commissioned a ‘cohort’ analysis which compared the outcomes for advice seekers recorded by the LSRC’s civil and social justice survey as between those who sought help from informal sources, non-legal service providers, and legal aid providers. This identified a statistically significant better level of outcomes from legal aid services.7 The Law Society, LAPG, ILPA, EHRC, LASA, ASA and many others referred to the ‘business case’ research by Citizens Advice which used LSC outcomes and data from the LSRC’s civil and social justice survey to estimate (on 2008-9 figures) the cost-benefit ratio for key civil categories of legal aid advice. This research looked at the ‘adverse consequences’ of civil problems and found that: • for every £1 of legal aid expenditure on housing advice, the state potentially saves £2.34 • for every £1 of legal aid expenditure on debt advice, the state potentially saves £2.98 • for every £1 of legal aid expenditure on benefits advice, the state potentially saves £8.80 • for every £1 of legal aid expenditure on employment advice, the state potentially saves £7.13.8 Some respondents referred to work undertaken by the New Economics Foundation for the Law Centres Federation also suggested that the ‘social return’ for legal help for clients with the most complex problems could be as high £10 to every £1 invested.9 USER: The text below includes the reactions of a number of organizations to proposed changes to the availability of legal aid in the United Kingdom. According to the text, what is the financial case against these changes? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
16
36
1,343
null
859
Using only the provided text, answer all following questions and follow all guidelines that are explicitly given.
What is the meaning of "forward-looking statements," what terms are listed as examples, and why should the reader be made aware of these kinds of statements according to the text?
JOANN Receives Court Approval for Prepackaged Financial Recapitalization Plan Apr 25, 2024 Expects to Emerge from Court-Supervised Process in the Coming Days with the Lowest Level of Debt in More than a Decade HUDSON, Ohio, April 25, 2024 (GLOBE NEWSWIRE) -- JOANN Inc. (“JOANN” or the “Company”), the nation’s category leader in sewing and fabrics with one of the largest arts and crafts offerings, today announced that the U.S. Bankruptcy Court for the District of Delaware has confirmed the Company’s Prepackaged Joint Plan of Reorganization. JOANN expects to successfully complete its financial restructuring and emerge from the court supervised process in the coming days. As reiterated throughout this expedited process, the Company’s more than 800 store locations remain open and JOANN.com continues to offer supplies for any creative need, and the Company was able to preserve the jobs of its more than 18,000 Team Members in connection with this process. About JOANN For 80 years, JOANN has inspired creativity in the hearts, hands, and minds of its customers. From a single storefront in Cleveland, Ohio, the nation’s category leader in sewing and fabrics and one of the fastest growing competitors in the arts and crafts industry has grown to include 829 store locations across 49 states and a robust e-commerce business. With the goal of helping every customer find their creative Happy Place, JOANN serves as a convenient single source for all of the supplies, guidance, and inspiration needed to achieve any project or passion. Forward-Looking Statements This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. The Company intends such forward-looking statements to be covered by the safe harbor provisions for forward-looking statements contained in Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Readers can generally identify forward-looking statements by the use of forward-looking terminology such as “anticipate,” “believe,” “continue,” “could,” “estimate,” “expect,” “intend,” “may,” “might,” “plan,” “potential,” “predict,” “seek,” “vision,” “should,” or the negative thereof or other variations thereon or comparable terminology. Forward-looking statements include those we make regarding the Company’s ability to continuing operating its business and implement the restructuring pursuant to the Chapter 11 cases, including the timetable of completing such transactions, if at all. The preceding list is not intended to be an exhaustive list of all of the Company’s forward-looking statements. The Company has based these forward-looking statements on its current expectations, assumptions, estimates and projections. While the Company believes these expectations, assumptions, estimates and projections are reasonable, such forward-looking statements are only predictions and involve known and unknown risks and uncertainties, many of which are beyond the Company’s control. Given these risks and uncertainties, readers are cautioned not to place undue reliance on such forward-looking statements. The forward-looking statements included elsewhere in this press release are not guarantees. Any forward-looking statement that the Company makes in this press release speaks only as of the date of such statement. 
Except as required by law, the Company does not undertake any obligation to update or revise, or to publicly announce any update or revision to, any of the forward-looking statements, whether as a result of new information, future events or otherwise after the date of this press release.
Using only the provided text, answer all following questions and follow all guidelines that are explicitly given. EVIDENCE: JOANN Receives Court Approval for Prepackaged Financial Recapitalization Plan Apr 25, 2024 Expects to Emerge from Court-Supervised Process in the Coming Days with the Lowest Level of Debt in More than a Decade HUDSON, Ohio, April 25, 2024 (GLOBE NEWSWIRE) -- JOANN Inc. (“JOANN” or the “Company”), the nation’s category leader in sewing and fabrics with one of the largest arts and crafts offerings, today announced that the U.S. Bankruptcy Court for the District of Delaware has confirmed the Company’s Prepackaged Joint Plan of Reorganization. JOANN expects to successfully complete its financial restructuring and emerge from the court supervised process in the coming days. As reiterated throughout this expedited process, the Company’s more than 800 store locations remain open and JOANN.com continues to offer supplies for any creative need, and the Company was able to preserve the jobs of its more than 18,000 Team Members in connection with this process. About JOANN For 80 years, JOANN has inspired creativity in the hearts, hands, and minds of its customers. From a single storefront in Cleveland, Ohio, the nation’s category leader in sewing and fabrics and one of the fastest growing competitors in the arts and crafts industry has grown to include 829 store locations across 49 states and a robust e-commerce business. With the goal of helping every customer find their creative Happy Place, JOANN serves as a convenient single source for all of the supplies, guidance, and inspiration needed to achieve any project or passion. Forward-Looking Statements This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. The Company intends such forward-looking statements to be covered by the safe harbor provisions for forward-looking statements contained in Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Readers can generally identify forward-looking statements by the use of forward-looking terminology such as “anticipate,” “believe,” “continue,” “could,” “estimate,” “expect,” “intend,” “may,” “might,” “plan,” “potential,” “predict,” “seek,” “vision,” “should,” or the negative thereof or other variations thereon or comparable terminology. Forward-looking statements include those we make regarding the Company’s ability to continuing operating its business and implement the restructuring pursuant to the Chapter 11 cases, including the timetable of completing such transactions, if at all. The preceding list is not intended to be an exhaustive list of all of the Company’s forward-looking statements. The Company has based these forward-looking statements on its current expectations, assumptions, estimates and projections. While the Company believes these expectations, assumptions, estimates and projections are reasonable, such forward-looking statements are only predictions and involve known and unknown risks and uncertainties, many of which are beyond the Company’s control. Given these risks and uncertainties, readers are cautioned not to place undue reliance on such forward-looking statements. The forward-looking statements included elsewhere in this press release are not guarantees. Any forward-looking statement that the Company makes in this press release speaks only as of the date of such statement. 
Except as required by law, the Company does not undertake any obligation to update or revise, or to publicly announce any update or revision to, any of the forward-looking statements, whether as a result of new information, future events or otherwise after the date of this press release. USER: What is the meaning of "forward-looking statements," what terms are listed as examples, and why should the reader be made aware of these kinds of statements according to the text? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
17
30
545
null
855
Only refer to the attached document in providing your response.
What are the health benefits of stretching?
Stretching: 9 Benefits Is stretching good for you? There are many benefits to regular stretching. Not only can stretching help increase your flexibility, which is an important factor of fitness, but it can also improve your posture, reduce stress and body aches, and more. 9 Benefits of stretching 1. Increases your flexibility Regular stretching can help increase your flexibility, which is crucial for your overall health. Not only can improved flexibility help you to perform everyday activities with relative ease, but it can also help delay the reduced mobility that can come with aging. 2. Increases your range of motion Being able to move a joint through its full range of motion gives you more freedom of movement. Stretching on a regular basis can help increase your range of motion. 3. Improves your performance in physical activities Performing dynamic stretches (moving stretches) prior to physical activities has been shown to help. It may also help improve your performance in an athletic event or exercise. 4. Increases blood flow to your muscles Performing stretches on a regular basis may improve your circulation. Improved circulation increases blood flow to your muscles, which can shorten your recovery time and reduce muscle soreness (also known as delayed onset muscle soreness or DOMS). 5. Improves your posture Muscle imbalances are common and can lead to poor posture. One source found that a combination of strengthening and stretching specific muscle groups can reduce musculoskeletal pain and encourage proper alignment. That, in turn, may help improve your posture. 6. Helps to heal and prevent back pain Tight muscles can lead to a decrease in your range of motion. When this happens, you increase the likelihood of straining the muscles in your back. Stretching can help heal an existing back injury by stretching the muscles. A regular stretching routine can also help prevent future back pain by strengthening your back muscles and reducing your risk for muscle strain. 7. Is great for stress relief When you’re experiencing stress, there’s a good chance your muscles are tense. That’s because your muscles tend to tighten up in response to physical and emotional stress. Focus on areas of your body where you tend to hold your stress, such as your neck, shoulders, and upper back. 8. Can calm your mind Participating in a regular stretching program not only helps increase your flexibility, but it can also calm your mind. While you stretch, focus on mindfulness and meditation exercises, which give your mind a mental break. 9. Helps decrease tension headaches Tension and stress headaches can interfere with your daily life. In addition to a proper diet, adequate hydration, and plenty of rest, stretching may help reduce the tension you feel from headaches.
Only refer to the attached document in providing your response. EVIDENCE: Stretching: 9 Benefits Is stretching good for you? There are many benefits to regular stretching. Not only can stretching help increase your flexibility, which is an important factor of fitness, but it can also improve your posture, reduce stress and body aches, and more. 9 Benefits of stretching 1. Increases your flexibility Regular stretching can help increase your flexibility, which is crucial for your overall health. Not only can improved flexibility help you to perform everyday activities with relative ease, but it can also help delay the reduced mobility that can come with aging. 2. Increases your range of motion Being able to move a joint through its full range of motion gives you more freedom of movement. Stretching on a regular basis can help increase your range of motion. 3. Improves your performance in physical activities Performing dynamic stretches (moving stretches) prior to physical activities has been shown to help. It may also help improve your performance in an athletic event or exercise. 4. Increases blood flow to your muscles Performing stretches on a regular basis may improve your circulation. Improved circulation increases blood flow to your muscles, which can shorten your recovery time and reduce muscle soreness (also known as delayed onset muscle soreness or DOMS). 5. Improves your posture Muscle imbalances are common and can lead to poor posture. One source found that a combination of strengthening and stretching specific muscle groups can reduce musculoskeletal pain and encourage proper alignment. That, in turn, may help improve your posture. 6. Helps to heal and prevent back pain Tight muscles can lead to a decrease in your range of motion. When this happens, you increase the likelihood of straining the muscles in your back. Stretching can help heal an existing back injury by stretching the muscles. A regular stretching routine can also help prevent future back pain by strengthening your back muscles and reducing your risk for muscle strain. 7. Is great for stress relief When you’re experiencing stress, there’s a good chance your muscles are tense. That’s because your muscles tend to tighten up in response to physical and emotional stress. Focus on areas of your body where you tend to hold your stress, such as your neck, shoulders, and upper back. 8. Can calm your mind Participating in a regular stretching program not only helps increase your flexibility, but it can also calm your mind. While you stretch, focus on mindfulness and meditation exercises, which give your mind a mental break. 9. Helps decrease tension headaches Tension and stress headaches can interfere with your daily life. In addition to a proper diet, adequate hydration, and plenty of rest, stretching may help reduce the tension you feel from headaches. USER: What are the health benefits of stretching? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
10
7
451
null
363
Create your answer using only information from the context to answer this question:
What advantages does Nintendo have over its competitors?
Internal environment of Nintendo (1) Analysis of Nintendo's advantages Competitive advantage refers to an enterprise's ability to outperform its competitors, which helps to achieve its main goal -- profit. Nintendo's strengths lie in the following ways [3]. Nintendo has developed a unique profit distribution system based on its nearly 50 years of experience in the game industry. At that time, the manager in charge of Nintendo drew lessons from the "collapse of Atari". First, he set up a "Mario Club" game quality supervision agency to strictly screen the game software on the Nintendo game console. Later, he set up a "royalty system" and formulated a set of rules for game review, platform access, and game revenue sharing, which brought huge profits to Nintendo, At the same time, it objectively promoted the benign development of the Japanese game industry at that time. These systems are also the internal reason why the overall quality of Nintendo Switch games is much better than its competitors. Super big IPs such as Super Mario and Legend of Zelda have always maintained a good reputation and remain popular among players. It is these unique systems that enable Nintendo to maintain a high-profit margin even in the context of economic depression [4]. (2) Analysis of Nintendo's disadvantages First, Nintendo's failed family business management style. The presidents of Nintendo's Japanese and American divisions (NOA) often have differences due to huge differences in management methods, and the consequences of such differences are devastating because they will lead to key employees becoming vulnerable and even leaving to do other work. Therefore, the family business management model seriously leads to the failure of efficient cooperation between Nintendo's various branches, and will also seriously harm Nintendo's external reputation and damage Nintendo's overall interests [5]. Secondly, weak technology research and development capability. With the growing demand for personalized service, companies are required to provide increasingly specialized service strategies and differentiated solutions, for example, more and more game companies begin to focus on creating products with local characteristics based on the language and cultural background of different regions, which is also the basis for a game to be promoted around the world. In addition, the development of modern games also needs the support of technological innovation. For example, more and more VR, AR, and motion capture games are emerging in the market. It would be unthinkable for Nintendo to spend huge resources on a new generation of high-performance consoles to compete with SONY. Conversely, it is in the area of research and development that Nintendo's other rival, Microsoft, has the greatest advantage. 3.3.2 External environment of Nintendo (1) Opportunity for Nintendo In terms of technology and business environment, Nintendo has much more experience than its competitors with the background of years of exploration in the gaming industry. Even though Microsoft recently acquired the game giant Blizzard, it intends to expand its market share in the game industry. However, Microsoft did not intervene in Blizzard's daily operation, which was also due to its limited experience in the game industry. These factors can also reduce Nintendo's competitive pressure [6]. In terms of the political and legal environment, the government's favorable policies also help game companies expand their overseas markets. 
Also, under the catalysis of the epidemic economy, video games have become one of the most popular cultural and creative activities for young people in the world. Many governments are aware of this trend and have introduced a series of supportive policies, such as setting up special funds for the game industry; Promise the game developer that adding landmark landmarks in the game can get financial support and tax concessions. These policies are good news for multinational game companies like Nintendo to expand their overseas markets. (2) Threat for Nintendo Highlights in Business, Economics and Management FMIBM 2023 Volume 10 (2023) 194 Microsoft's strong economic strength enables them to continue to operate in the gaming industry after experiencing the cost of failure in the game product competition of 4 billion dollars and make up for the shortcomings of their predecessors in new products. At the same time, Microsoft could use money offensive to buy third-party platform certification. Finally, and most importantly, Microsoft's latest games are coming out one year earlier than Nintendo's or SONY's, making it harder for Nintendo to time and win the market. 4. Nintendo’s market strategy suggestion As the gaming industry continues to evolve, scholars generally agree that gaming companies need not only excellent hardware and software technology but also effective marketing strategies. Schilling MA (2003) believes that if companies in the game industry want to maintain their market share, they need to improve their marketing strategies to follow or even guide the market trend [7]. Marchand A and Hennig-Thurau T (2013) think that companies in the game industry need to pay attention to consumers' preferences in the market, understand consumers' needs in the form of questionnaires, etc., to design their products in a targeted way [8]. SC Jain (1989) thinks that companies in the game industry need to pay attention to consumers' preferences in the market, understand consumers' needs in the form of questionnaires, etc., to design their products in a targeted way [9]. Based on the above analysis of Nintendo's internal and external conditions, this report proposes the following improvement suggestions [10]. Nintendo could consider setting up more offline experience stores overseas. Nintendo's classic game characters, such as Mario, Pokemon, Kirby, and Link, are familiar to the public. Taking these characters as ambassadors of offline experience stores, they can attract enough attention without too much publicity expenses and are very attractive to children and adults. Nintendo could consider adding episode-by-episode, level-by-level incrementally unlocked purchases. Because Nintendo's software games are priced in a complementary way to the console's price, they are generally priced higher than other games in the market. This can lead to players who want to play a game but don't buy it because the price is too high. In this case, the buy-out system can be supplemented with the option of gradually unlocking purchases by episode or level, and players can choose to buy them out or buy them separately. That way, players can play more games, and buying incrementally doesn't feel like a buy-out. This is also a great way for Nintendo to increase its sales. Nintendo could increase its ban on cracking consoles and develop new encryption technologies. For now, Nintendo's crackdown on cracked consoles is not strong enough, and only users who use cracked consoles to connect to the Internet have been blocked. 
In this case, to protect their intellectual property rights, but also promote the sale of their legitimate games, the development of a new set of encryption technology is worth considering.
Create your answer using only information from the context to answer this question: What advantages does Nintendo have over its competitors? Internal environment of Nintendo (1) Analysis of Nintendo's advantages Competitive advantage refers to an enterprise's ability to outperform its competitors, which helps to achieve its main goal -- profit. Nintendo's strengths lie in the following ways [3]. Nintendo has developed a unique profit distribution system based on its nearly 50 years of experience in the game industry. At that time, the manager in charge of Nintendo drew lessons from the "collapse of Atari". First, he set up a "Mario Club" game quality supervision agency to strictly screen the game software on the Nintendo game console. Later, he set up a "royalty system" and formulated a set of rules for game review, platform access, and game revenue sharing, which brought huge profits to Nintendo, At the same time, it objectively promoted the benign development of the Japanese game industry at that time. These systems are also the internal reason why the overall quality of Nintendo Switch games is much better than its competitors. Super big IPs such as Super Mario and Legend of Zelda have always maintained a good reputation and remain popular among players. It is these unique systems that enable Nintendo to maintain a high-profit margin even in the context of economic depression [4]. (2) Analysis of Nintendo's disadvantages First, Nintendo's failed family business management style. The presidents of Nintendo's Japanese and American divisions (NOA) often have differences due to huge differences in management methods, and the consequences of such differences are devastating because they will lead to key employees becoming vulnerable and even leaving to do other work. Therefore, the family business management model seriously leads to the failure of efficient cooperation between Nintendo's various branches, and will also seriously harm Nintendo's external reputation and damage Nintendo's overall interests [5]. Secondly, weak technology research and development capability. With the growing demand for personalized service, companies are required to provide increasingly specialized service strategies and differentiated solutions, for example, more and more game companies begin to focus on creating products with local characteristics based on the language and cultural background of different regions, which is also the basis for a game to be promoted around the world. In addition, the development of modern games also needs the support of technological innovation. For example, more and more VR, AR, and motion capture games are emerging in the market. It would be unthinkable for Nintendo to spend huge resources on a new generation of high-performance consoles to compete with SONY. Conversely, it is in the area of research and development that Nintendo's other rival, Microsoft, has the greatest advantage. 3.3.2 External environment of Nintendo (1) Opportunity for Nintendo In terms of technology and business environment, Nintendo has much more experience than its competitors with the background of years of exploration in the gaming industry. Even though Microsoft recently acquired the game giant Blizzard, it intends to expand its market share in the game industry. However, Microsoft did not intervene in Blizzard's daily operation, which was also due to its limited experience in the game industry. These factors can also reduce Nintendo's competitive pressure [6]. 
In terms of the political and legal environment, the government's favorable policies also help game companies expand their overseas markets. Also, under the catalysis of the epidemic economy, video games have become one of the most popular cultural and creative activities for young people in the world. Many governments are aware of this trend and have introduced a series of supportive policies, such as setting up special funds for the game industry; Promise the game developer that adding landmark landmarks in the game can get financial support and tax concessions. These policies are good news for multinational game companies like Nintendo to expand their overseas markets. (2) Threat for Nintendo Highlights in Business, Economics and Management FMIBM 2023 Volume 10 (2023) 194 Microsoft's strong economic strength enables them to continue to operate in the gaming industry after experiencing the cost of failure in the game product competition of 4 billion dollars and make up for the shortcomings of their predecessors in new products. At the same time, Microsoft could use money offensive to buy third-party platform certification. Finally, and most importantly, Microsoft's latest games are coming out one year earlier than Nintendo's or SONY's, making it harder for Nintendo to time and win the market. 4. Nintendo’s market strategy suggestion As the gaming industry continues to evolve, scholars generally agree that gaming companies need not only excellent hardware and software technology but also effective marketing strategies. Schilling MA (2003) believes that if companies in the game industry want to maintain their market share, they need to improve their marketing strategies to follow or even guide the market trend [7]. Marchand A and Hennig-Thurau T (2013) think that companies in the game industry need to pay attention to consumers' preferences in the market, understand consumers' needs in the form of questionnaires, etc., to design their products in a targeted way [8]. SC Jain (1989) thinks that companies in the game industry need to pay attention to consumers' preferences in the market, understand consumers' needs in the form of questionnaires, etc., to design their products in a targeted way [9]. Based on the above analysis of Nintendo's internal and external conditions, this report proposes the following improvement suggestions [10]. Nintendo could consider setting up more offline experience stores overseas. Nintendo's classic game characters, such as Mario, Pokemon, Kirby, and Link, are familiar to the public. Taking these characters as ambassadors of offline experience stores, they can attract enough attention without too much publicity expenses and are very attractive to children and adults. Nintendo could consider adding episode-by-episode, level-by-level incrementally unlocked purchases. Because Nintendo's software games are priced in a complementary way to the console's price, they are generally priced higher than other games in the market. This can lead to players who want to play a game but don't buy it because the price is too high. In this case, the buy-out system can be supplemented with the option of gradually unlocking purchases by episode or level, and players can choose to buy them out or buy them separately. That way, players can play more games, and buying incrementally doesn't feel like a buy-out. This is also a great way for Nintendo to increase its sales. Nintendo could increase its ban on cracking consoles and develop new encryption technologies. 
For now, Nintendo's crackdown on cracked consoles has not been strong enough: only users who connect cracked consoles to the Internet have been blocked. To protect its intellectual property rights and also promote the sale of its legitimate games, developing a new set of encryption technology is worth considering.
Create your answer using only information from the context to answer this question: EVIDENCE: Internal environment of Nintendo (1) Analysis of Nintendo's advantages Competitive advantage refers to an enterprise's ability to outperform its competitors, which helps to achieve its main goal -- profit. Nintendo's strengths lie in the following ways [3]. Nintendo has developed a unique profit distribution system based on its nearly 50 years of experience in the game industry. At that time, the manager in charge of Nintendo drew lessons from the "collapse of Atari". First, he set up a "Mario Club" game quality supervision agency to strictly screen the game software on the Nintendo game console. Later, he set up a "royalty system" and formulated a set of rules for game review, platform access, and game revenue sharing, which brought huge profits to Nintendo, At the same time, it objectively promoted the benign development of the Japanese game industry at that time. These systems are also the internal reason why the overall quality of Nintendo Switch games is much better than its competitors. Super big IPs such as Super Mario and Legend of Zelda have always maintained a good reputation and remain popular among players. It is these unique systems that enable Nintendo to maintain a high-profit margin even in the context of economic depression [4]. (2) Analysis of Nintendo's disadvantages First, Nintendo's failed family business management style. The presidents of Nintendo's Japanese and American divisions (NOA) often have differences due to huge differences in management methods, and the consequences of such differences are devastating because they will lead to key employees becoming vulnerable and even leaving to do other work. Therefore, the family business management model seriously leads to the failure of efficient cooperation between Nintendo's various branches, and will also seriously harm Nintendo's external reputation and damage Nintendo's overall interests [5]. Secondly, weak technology research and development capability. With the growing demand for personalized service, companies are required to provide increasingly specialized service strategies and differentiated solutions, for example, more and more game companies begin to focus on creating products with local characteristics based on the language and cultural background of different regions, which is also the basis for a game to be promoted around the world. In addition, the development of modern games also needs the support of technological innovation. For example, more and more VR, AR, and motion capture games are emerging in the market. It would be unthinkable for Nintendo to spend huge resources on a new generation of high-performance consoles to compete with SONY. Conversely, it is in the area of research and development that Nintendo's other rival, Microsoft, has the greatest advantage. 3.3.2 External environment of Nintendo (1) Opportunity for Nintendo In terms of technology and business environment, Nintendo has much more experience than its competitors with the background of years of exploration in the gaming industry. Even though Microsoft recently acquired the game giant Blizzard, it intends to expand its market share in the game industry. However, Microsoft did not intervene in Blizzard's daily operation, which was also due to its limited experience in the game industry. These factors can also reduce Nintendo's competitive pressure [6]. 
In terms of the political and legal environment, favorable government policies also help game companies expand their overseas markets. In addition, catalyzed by the pandemic economy, video games have become one of the most popular cultural and creative activities for young people around the world. Many governments are aware of this trend and have introduced a series of supportive policies, such as setting up special funds for the game industry and promising game developers that adding landmarks to their games can earn financial support and tax concessions. These policies are good news for multinational game companies like Nintendo that want to expand their overseas markets. (2) Threat for Nintendo Microsoft's strong economic strength enables it to continue operating in the gaming industry after absorbing a 4-billion-dollar failure in game product competition, and to make up for the shortcomings of its earlier products in new ones. At the same time, Microsoft could use aggressive spending to buy third-party platform certification. Finally, and most importantly, Microsoft's latest games come out a year earlier than Nintendo's or SONY's, making it harder for Nintendo to time its releases and win the market. 4. Nintendo’s market strategy suggestion As the gaming industry continues to evolve, scholars generally agree that gaming companies need not only excellent hardware and software technology but also effective marketing strategies. Schilling MA (2003) believes that if companies in the game industry want to maintain their market share, they need to improve their marketing strategies to follow or even guide the market trend [7]. Marchand A and Hennig-Thurau T (2013) think that companies in the game industry need to pay attention to consumers' preferences in the market and understand consumers' needs, for example through questionnaires, in order to design their products in a targeted way [8]. SC Jain (1989) makes a similar argument, holding that companies in the game industry need to pay attention to consumers' preferences in the market and understand consumers' needs, for example through questionnaires, in order to design their products in a targeted way [9]. Based on the above analysis of Nintendo's internal and external conditions, this report proposes the following improvement suggestions [10]. Nintendo could consider setting up more offline experience stores overseas. Nintendo's classic game characters, such as Mario, Pokemon, Kirby, and Link, are familiar to the public. With these characters as ambassadors for offline experience stores, the stores can attract enough attention without much publicity expense and are very appealing to both children and adults. Nintendo could consider adding episode-by-episode, level-by-level incrementally unlocked purchases. Because Nintendo's software games are priced to complement the console's price, they are generally priced higher than other games on the market. This can lead to players who want to play a game not buying it because the price is too high. In this case, the buy-out system can be supplemented with the option of gradually unlocking purchases by episode or level, and players can choose to buy a game outright or buy it in parts. That way, players can play more games, and buying incrementally does not feel like a full buy-out. This is also a good way for Nintendo to increase its sales. Nintendo could strengthen its ban on cracked consoles and develop new encryption technologies.
For now, Nintendo's crackdown on cracked consoles has not been strong enough: only users who connect cracked consoles to the Internet have been blocked. To protect its intellectual property rights and also promote the sale of its legitimate games, developing a new set of encryption technology is worth considering. USER: What advantages does Nintendo have over its competitors? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
13
8
1,116
null
14
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
What do you expect of Bitcoin in the near future? Will it grow or diminish? Make your response thorough and no less than 150 words?
Bitcoin's recent price movements have caused concern among investors about what might come next. However, by looking at key indicators such as the 200-week moving average, Pi Cycle Top Indicator, and the Golden Ratio Multiplier, we can gain insights into potential support and resistance levels for Bitcoin. Leaning Bearish? If this bearish price action is to continue and price breaks to lower lows, the 200-week moving average heatmap (blue line), a historically critical support level, is currently close to $39,000 but fast approaching $40,000 (white line). This round psychological level also aligns with the Bitcoin Investor Tool (green line), which has also converged with the 200-week moving average; together, these could serve as potential downside targets. Figure 1: Converging levels of support at $40,000 if bearish price action continues. Nearby Targets Above the current price there are several important nearby levels that investors need to keep an eye on. The Pi Cycle Top Indicator (upper orange line) suggests a crucial resistance level around $62,000, based on the 111-day moving average. The Golden Ratio Multiplier (lower orange line) indicates that the 350-day moving average, currently around $53,000, has been a solid level of support during this market cycle, especially as this is close to the technical $52,000 support and significant psychological support of $50,000. Figure 2: Nearby support between $53,000 and $50,000, with immediate resistance between $60,000 and $62,000. More Chop? In the short term, Bitcoin could very well continue ranging between the low $50,000 region and the $60,000 resistance, similar to the range formed between $70,000 and $60,000 that led to fairly stagnant price action for a majority of 2024. Despite recent downturns, Bitcoin's long-term outlook is still promising. In the past, Bitcoin has experienced similar periods of fluctuating prices before eventually reaching new highs. However, this process can take some time, potentially weeks or even months, before a sustainable trend reversal occurs following periods of low volatility. Conclusion For long-term investors, it's important to remain calm and not be swayed by day-to-day price changes. Over-trading often leads to poor decisions and losses, and the key is to stick to a strategy, whether it involves accumulating at support levels or taking profits at resistance. Bitcoin's recent price action has not been ideal, but with some simple technical analysis and a clear understanding of support and resistance levels, investors can prepare and react rather than overreact to natural market fluctuations. While investing in Bitcoin is still considered a wild ride, the asset is quickly maturing. Financial institutions are closing in and creating hybrid vehicles to invest in cryptocurrency. The ecosystem reached a new milestone with the advent of Bitcoin ETFs, making people realize the immensity of Bitcoin's potential in traditional markets and spurring new demand. It is not enough to leave the knowledge to technical experts or institutions. By understanding the importance of secure Bitcoin storage and the advancements in custody solutions, investors can make better-informed decisions about safeguarding their digital assets.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== What do you expect of Bitcoin in the near future? Will it grow or diminish? Make your response thorough and no less than 150 words? {passage 0} ========== Bitcoin's recent price movements have caused concern among investors about what might come next. However, by looking at key indicators such as the 200-week moving average, Pi Cycle Top Indicator, and the Golden Ratio Multiplier, we can gain insights into potential support and resistance levels for Bitcoin. Leaning Bearish? If this bearish price action is to continue and price breaks to lower lows, the 200-week moving average heatmap (blue line), a historically critical support level, is currently close to $39,000 but fast approaching $40,000 (white line). This round psychological level also aligns with the Bitcoin Investor Tool (green line), which has also converged with the 200-week moving average; together, these could serve as potential downside targets. Figure 1: Converging levels of support at $40,000 if bearish price action continues. Nearby Targets Above the current price there are several important nearby levels that investors need to keep an eye on. The Pi Cycle Top Indicator (upper orange line) suggests a crucial resistance level around $62,000, based on the 111-day moving average. The Golden Ratio Multiplier (lower orange line) indicates that the 350-day moving average, currently around $53,000, has been a solid level of support during this market cycle, especially as this is close to the technical $52,000 support and significant psychological support of $50,000. Figure 2: Nearby support between $53,000 and $50,000, with immediate resistance between $60,000 and $62,000. More Chop? In the short term, Bitcoin could very well continue ranging between the low $50,000 region and the $60,000 resistance, similar to the range formed between $70,000 and $60,000 that led to fairly stagnant price action for a majority of 2024. Despite recent downturns, Bitcoin's long-term outlook is still promising. In the past, Bitcoin has experienced similar periods of fluctuating prices before eventually reaching new highs. However, this process can take some time, potentially weeks or even months, before a sustainable trend reversal occurs following periods of low volatility. Conclusion For long-term investors, it's important to remain calm and not be swayed by day-to-day price changes. Over-trading often leads to poor decisions and losses, and the key is to stick to a strategy, whether it involves accumulating at support levels or taking profits at resistance. Bitcoin's recent price action has not been ideal, but with some simple technical analysis and a clear understanding of support and resistance levels, investors can prepare and react rather than overreact to natural market fluctuations. While investing in Bitcoin is still considered a wild ride, the asset is quickly maturing. Financial institutions are closing in and creating hybrid vehicles to invest in cryptocurrency. The ecosystem reached a new milestone with the advent of Bitcoin ETFs, making people realize the immensity of Bitcoin's potential in traditional markets and spurring new demand. 
It is not enough to leave the knowledge to technical experts or institutions. By understanding the importance of secure Bitcoin storage and the advancements in custody solutions, investors can make better-informed decisions about safeguarding their digital assets. https://bitcoinmagazine.com/markets/bitcoin-price-action-what-to-expect-next
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Bitcoin's recent price movements have caused concern among investors about what might come next. However, by looking at key indicators such as the 200-week moving average, Pi Cycle Top Indicator, and the Golden Ratio Multiplier, we can gain insights into potential support and resistance levels for Bitcoin. Leaning Bearish? If this bearish price action is to continue and price breaks to lower lows, the 200-week moving average heatmap (blue line), a historically critical support level, is currently close to $39,000 but fast approaching $40,000 (white line). This round psychological level also aligns with the Bitcoin Investor Tool (green line), which has also converged with the 200-week moving average; together, these could serve as potential downside targets. Figure 1: Converging levels of support at $40,000 if bearish price action continues. Nearby Targets Above the current price there are several important nearby levels that investors need to keep an eye on. The Pi Cycle Top Indicator (upper orange line) suggests a crucial resistance level around $62,000, based on the 111-day moving average. The Golden Ratio Multiplier (lower orange line) indicates that the 350-day moving average, currently around $53,000, has been a solid level of support during this market cycle, especially as this is close to the technical $52,000 support and significant psychological support of $50,000. Figure 2: Nearby support between $53,000 and $50,000, with immediate resistance between $60,000 and $62,000. More Chop? In the short term, Bitcoin could very well continue ranging between the low $50,000 region and the $60,000 resistance, similar to the range formed between $70,000 and $60,000 that led to fairly stagnant price action for a majority of 2024. Despite recent downturns, Bitcoin's long-term outlook is still promising. In the past, Bitcoin has experienced similar periods of fluctuating prices before eventually reaching new highs. However, this process can take some time, potentially weeks or even months, before a sustainable trend reversal occurs following periods of low volatility. Conclusion For long-term investors, it's important to remain calm and not be swayed by day-to-day price changes. Over-trading often leads to poor decisions and losses, and the key is to stick to a strategy, whether it involves accumulating at support levels or taking profits at resistance. Bitcoin's recent price action has not been ideal, but with some simple technical analysis and a clear understanding of support and resistance levels, investors can prepare and react rather than overreact to natural market fluctuations. While investing in Bitcoin is still considered a wild ride, the asset is quickly maturing. Financial institutions are closing in and creating hybrid vehicles to invest in cryptocurrency. The ecosystem reached a new milestone with the advent of Bitcoin ETFs, making people realize the immensity of Bitcoin's potential in traditional markets and spurring new demand. It is not enough to leave the knowledge to technical experts or institutions. 
By understanding the importance of secure Bitcoin storage and the advancements in custody solutions, investors can make better-informed decisions about safeguarding their digital assets. USER: What do you expect of Bitcoin in the near future? Will it grow or diminish? Make your response thorough and no less than 150 words? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
25
524
null
86
Only use the provided text to answer the question. Do not use outside resources. The entire answer should be short.
According to the provided text, what is the typical maximum range for Infrared (IR)?
**Robotics Sensors and Actuators** Robot Sensors • Sensors are devices that can sense and measure physical properties of the environment, • e.g. temperature, luminance, resistance to touch, weight, size, etc. • The key phenomenon is transduction • Transduction (engineering) is a process that converts one type of energy to another • They deliver low-level information about the environment the robot is working in. – Return an incomplete description of the world • This information is noisy (imprecise). • Cannot be modelled completely: – Reading = f(env) where f is the model of the sensor – Finding the inverse: • ill-posed problem (solution not uniquely defined) • collapsing of dimensionality leads to ambiguity Types of Sensor • General classification: – active versus passive • Active: emit energy in environment – More robust, less efficient • Passive: passively receive energy from env. – Less intrusive, but depends on env. e.g. light for camera • Example: stereo vision versus range finder. – contact versus non-contact Sensors • Proprioceptive Sensors (monitor state of robot) – IMU (accels & gyros) – Wheel encoders – Doppler radar … • Exteroceptive Sensors (monitor environment) – Cameras (single, stereo, omni, FLIR …) – Laser scanner – MW radar – Sonar – Tactile… Sensor Characteristics All sensors are characterized by various properties that describe their capabilities – Sensitivity: (change of output) ÷ (change of input) – Linearity: constancy of (output ÷ input) • Exception: logarithmic response cameras == wider dynamic range. – Measurement/Dynamic range: difference between min. and max. – Response Time: time required for a change in input to cause a change in the output – Accuracy: difference between measured & actual – Repeatability: difference between repeated measures – Resolution: smallest observable increment – Bandwidth: result of high resolution or cycle time Types of Sensor Specific examples – tactile – close-range proximity – angular position – infrared – Sonar – laser (various types) – radar – compasses, gyroscopes – Force – GPS – vision Tactile Sensors There are many different technologies – e.g. contact closure, magnetic, piezoelectric, etc. • For mobile robots these can be classified as – tactile feelers (antennae) often some form of metal wire passing through a wire loop - can be active (powered to mechanically search for surfaces) § tactile bumpers solid bar / plate acts on some form of contact switch e.g. mirror deflecting light beam, pressure bladder, wire loops, etc. § Pressure-sensitive rubber with scanning array Vibrissae/whiskers of rats – Surface texture information. – Distance of deflection. – Blind people using a cane. Proximity Sensors Tactile sensors allow obstacle detection – proximity sensors needed for true obstacle avoidance • Several technologies can detect the presence of particular fields without mechanical contact – magnetic reed switches • two thin magnetic strips of opposite polarity not quite touching • an external magnetic field closes the strip & makes contact Hall effect sensors • small voltage generated across a conductor carrying current – inductive sensors, capacitive sensors • inductive sensors can detect presence of metallic objects • capacitive sensors can detect metallic or dielectric materials Infrared Sensors Infrared sensors are probably the simplest type of non-contact sensor – widely used in mobile robotics to avoid obstacles • They work by – emitting infrared light • to differentiate emitted IR from ambient IR (e.g. 
lights, sun, etc.), the signal is modulated with a low frequency (100 Hz) – detecting any reflections off nearby surfaces • In certain environments, with careful calibration, IR sensors can be used for measuring the distance to the object – requires uniform surface colours and structures Infrared Sensors (Sharp) Measures the return angle of the infrared beam. Infrared Problems If the IR signal is detected, it is safe to assume that an object is present • However, the absence of reflected IR does not mean that no object is present! – “Absence of evidence is not evidence of absence.” C. Sagan – certain dark colours (black) are almost invisible to IR – IR sensors are not absolutely safe for object detection • In realistic situations (different colours & types of objects) there is no accurate distance information – it is best to avoid objects as soon as possible • IR are short range – typical maximum range is 50 to 100 cm Sonar Sensors • The fundamental principle of robot sonar sensors is the same as that used by bats – emit a chirp (e.g. 1.2 milliseconds) • a short powerful pulse of a range of frequencies of sound – its reflection off nearby surfaces is detected • As the speed of sound in air is known (≈ 330 m·s⁻¹) the distance to the object can be computed from the elapsed time between chirp and echo – minimum distance = 165 t_chirp (e.g. 21 cm at 1.2 ms) – maximum distance = 165 t_wait (e.g. 165 m at 1 s) • Usually referred to as ultrasonic sensors Sonar Problems • There are a number of problems and uncertainties associated with readings from sonar sensors – it is difficult to be sure in which direction an object is because the 3D sonar beam spreads out as it travels – specular reflections give rise to erroneous readings • the sonar beam hits a smooth surface at a shallow angle and so reflects away from the sensor • only when an object further away reflects the beam back does the sensor obtain a reading - but distance is incorrect – arrays of sonar sensors can experience crosstalk • one sensor detects the reflected beam of another sensor – the speed of sound varies with air temp. and pressure • a 16° C temp. change can cause a 30cm error at 10m Laser Range Finders • Laser range finders commonly used to measure the distance, velocity and acceleration of objects – also known as laser radar or lidar • The operating principle is the same as sonar – a short pulse of (laser) light is emitted – the time elapsed between emission and detection is used to determine distance (using the speed of light) • Due to the shorter wavelengths of lasers, the chance of specular reflections is much less – accuracies of millimetres (16 - 50mm) over 100m – 1D beam is usually swept to give a 2D planar beam • May not detect transparent surfaces (e.g. glass!) 
or dark objects RADAR • Radar usually uses electromagnetic energy in the 1 - 12.5 GHz frequency range – this corresponds to wavelengths of 30 cm - 2 cm • microwave energy – unaffected by fog, rain, dust, haze and smoke • It may use a pulsed time-of-flight methodology of sonar and lidar, but may also use other methods – continuous-wave phase detection – continuous-wave frequency modulation • Continuous-wave systems make use of Doppler effect to measure relative velocity of the target Angular Position: Rotary Encoder • Potentiometer – Used in the Servo on the boebots • Optical Disks (Relative) – Counting the slots – Direction by having pairs of emitters/receivers out of phase: Quadrature decoding – Can spin very fast: 500 kHz • Optical Disks (Absolute) – Grey encoding for absolute: • 0:0000, 1:1000, 2:1100, 3:0100, 4:0110, • 5:1110, 6:1010, 7:0010, 8:0011 • 9:1011, 10:1111, 11:0111, 12:0101, 13:1101, 14:1001, 15:0001 Compass Sensors • Compass sensors measure the horizontal component of the earth’s magnetic field – some birds use the vertical component too • The earth’s magnetic field is very weak and non-uniform, and changes over time – indoors there are likely to be many other field sources • steel girders, reinforced concrete, power lines, motors, etc. – an accurate absolute reference is unlikely, but the field is approx. constant, so can be used for local reference Gyroscopes • A gyroscope is a spinning wheel with most of its mass concentrated in the outer periphery – e.g. a bicycle wheel • Due to the law of conservation of momentum – the spinning wheel will stay in its original orientation – a force is required to rotate the gyroscope • A gyro. can thus be used to maintain orientation or to measure the rate and direction of rotation • In fact there are different types of mechanical gyro. – and even optical gyros with no moving parts! • these can be used in e.g. space probes to maintain orientation Ring Gyros • Use standing waves set up – between mirrors (laser ring gyro) – within a fiber optic cable (fibre optic ring gyro) • Measure rotation by observing beats in standing wave as the mirrors "rotate through it". IMUs • Gyro, accelerometer combination. • Typical designs (e.g. 3DM-GX1™) use tri-axial gyros to track dynamic orientation and tri-axial DC accelerometers along with the tri-axial magnetometers to track static orientation. • The embedded microprocessors contain programmable filter algorithms, which blend these static and dynamic responses in real-time. GPS • GPS uses a constellation of between 24 and 32 Medium Earth Orbit satellites. • Satellites broadcast their position + time. • Use travel time of 4 satellites and trilateration. • Suffers from “canyon” effect in cities. WiFi • Using the SSID and database. Odor Sensing Smell is ubiquitous in nature … both as an active and a passive sensor. Why is it so important? Advantages: evanescent, controllable, multi-valued, useful. What is an actuator? • Device for moving or controlling a system. • “Robot Muscles” Hydraulic Actuators • Pros: – Powerful – Fast – Stiff • Cons – Messy – Maintenance – External Pump Pneumatic Actuators • Pros: – Powerful – Cheap • Cons – Soft/Compliant – External Compressor Shape Memory Alloy Actuators • Works by warming and cooling Nitinol wires. 
• Pros: – Light – Powerful • Cons: – Slow (cooling) Electric Actuators • Pros – Better position precision – Well understood – No separate power source – Cheap • Cons – Heavy – Weaker/slower than hydraulics – Cooling issue • Stepper motors • DC motors – Servos • Continuous • Position • Others (not discussed) – Linear actuators – AC motors
<System Instruction> Only use the provided text to answer the question. Do not use outside resources. The entire answer should be short. ---------------- <Question> According to the provided text, what is the typical maximum range for Infrared (IR)? ---------------- <Text> **Robotics Sensors and Actuators** Robot Sensors • Sensors are devices that can sense and measure physical properties of the environment, • e.g. temperature, luminance, resistance to touch, weight, size, etc. • The key phenomenon is transduction • Transduction (engineering) is a process that converts one type of energy to another • They deliver low-level information about the environment the robot is working in. – Return an incomplete description of the world • This information is noisy (imprecise). • Cannot be modelled completely: – Reading = f(env) where f is the model of the sensor – Finding the inverse: • ill-posed problem (solution not uniquely defined) • collapsing of dimensionality leads to ambiguity Types of Sensor • General classification: – active versus passive • Active: emit energy in environment – More robust, less efficient • Passive: passively receive energy from env. – Less intrusive, but depends on env. e.g. light for camera • Example: stereo vision versus range finder. – contact versus non-contact Sensors • Proprioceptive Sensors (monitor state of robot) – IMU (accels & gyros) – Wheel encoders – Doppler radar … • Exteroceptive Sensors (monitor environment) – Cameras (single, stereo, omni, FLIR …) – Laser scanner – MW radar – Sonar – Tactile… Sensor Characteristics All sensors are characterized by various properties that describe their capabilities – Sensitivity: (change of output) ÷ (change of input) – Linearity: constancy of (output ÷ input) • Exception: logarithmic response cameras == wider dynamic range. – Measurement/Dynamic range: difference between min. and max. – Response Time: time required for a change in input to cause a change in the output – Accuracy: difference between measured & actual – Repeatability: difference between repeated measures – Resolution: smallest observable increment – Bandwidth: result of high resolution or cycle time Types of Sensor Specific examples – tactile – close-range proximity – angular position – infrared – Sonar – laser (various types) – radar – compasses, gyroscopes – Force – GPS – vision Tactile Sensors There are many different technologies – e.g. contact closure, magnetic, piezoelectric, etc. • For mobile robots these can be classified as – tactile feelers (antennae) often some form of metal wire passing through a wire loop - can be active (powered to mechanically search for surfaces) § tactile bumpers solid bar / plate acts on some form of contact switch e.g. mirror deflecting light beam, pressure bladder, wire loops, etc. § Pressure-sensitive rubber with scanning array Vibrissae/whiskers of rats – Surface texture information. – Distance of deflection. – Blind people using a cane. 
Proximity Sensors Tactile sensors allow obstacle detection – proximity sensors needed for true obstacle avoidance • Several technologies can detect the presence of particular fields without mechanical contact – magnetic reed switches • two thin magnetic strips of opposite polarity not quite touching • an external magnetic field closes the strip & makes contact Hall effect sensors • small voltage generated across a conductor carrying current – inductive sensors, capacitive sensors • inductive sensors can detect presence of metallic objects • capacitive sensors can detect metallic or dielectric materials Infrared Sensors Infrared sensors are probably the simplest type of non-contact sensor – widely used in mobile robotics to avoid obstacles • They work by – emitting infrared light • to differentiate emitted IR from ambient IR (e.g. lights, sun, etc.), the signal is modulated with a low frequency (100 Hz) – detecting any reflections off nearby surfaces • In certain environments, with careful calibration, IR sensors can be used for measuring the distance to the object – requires uniform surface colours and structures Infrared Sensors (Sharp) Measures the return angle of the infrared beam. Infrared Problems If the IR signal is detected, it is safe to assume that an object is present • However, the absence of reflected IR does not mean that no object is present! – “Absence of evidence is not evidence of absence.” C. Sagan – certain dark colours (black) are almost invisible to IR – IR sensors are not absolutely safe for object detection • In realistic situations (different colours & types of objects) there is no accurate distance information – it is best to avoid objects as soon as possible • IR are short range – typical maximum range is 50 to 100 cm Sonar Sensors • The fundamental principle of robot sonar sensors is the same as that used by bats – emit a chirp (e.g. 1.2 milliseconds) • a short powerful pulse of a range of frequencies of sound – its reflection off nearby surfaces is detected • As the speed of sound in air is known (≈ 330 m·s⁻¹) the distance to the object can be computed from the elapsed time between chirp and echo – minimum distance = 165 t_chirp (e.g. 21 cm at 1.2 ms) – maximum distance = 165 t_wait (e.g. 165 m at 1 s) • Usually referred to as ultrasonic sensors Sonar Problems • There are a number of problems and uncertainties associated with readings from sonar sensors – it is difficult to be sure in which direction an object is because the 3D sonar beam spreads out as it travels – specular reflections give rise to erroneous readings • the sonar beam hits a smooth surface at a shallow angle and so reflects away from the sensor • only when an object further away reflects the beam back does the sensor obtain a reading - but distance is incorrect – arrays of sonar sensors can experience crosstalk • one sensor detects the reflected beam of another sensor – the speed of sound varies with air temp. and pressure • a 16° C temp. 
change can cause a 30cm error at 10m Laser Range Finders • Laser range finders commonly used to measure the distance, velocity and acceleration of objects – also known as laser radar or lidar • The operating principle is the same as sonar – a short pulse of (laser) light is emitted – the time elapsed between emission and detection is used to determine distance (using the speed of light) • Due to the shorter wavelengths of lasers, the chance of specular reflections is much less – accuracies of millimetres (16 - 50mm) over 100m – 1D beam is usually swept to give a 2D planar beam • May not detect transparent surfaces (e.g. glass!) or dark objects RADAR • Radar usually uses electromagnetic energy in the 1 - 12.5 GHz frequency range – this corresponds to wavelengths of 30 cm - 2 cm • microwave energy – unaffected by fog, rain, dust, haze and smoke • It may use a pulsed time-of-flight methodology of sonar and lidar, but may also use other methods – continuous-wave phase detection – continuous-wave frequency modulation • Continuous-wave systems make use of Doppler effect to measure relative velocity of the target Angular Position: Rotary Encoder • Potentiometer – Used in the Servo on the boebots • Optical Disks (Relative) – Counting the slots – Direction by having pairs of emitters/receivers out of phase: Quadrature decoding – Can spin very fast: 500 kHz • Optical Disks (Absolute) – Grey encoding for absolute: • 0:0000, 1:1000, 2:1100, 3:0100, 4:0110, • 5:1110, 6:1010, 7:0010, 8:0011 • 9:1011, 10:1111, 11:0111, 12:0101, 13:1101, 14:1001, 15:0001 Compass Sensors • Compass sensors measure the horizontal component of the earth’s magnetic field – some birds use the vertical component too • The earth’s magnetic field is very weak and non-uniform, and changes over time – indoors there are likely to be many other field sources • steel girders, reinforced concrete, power lines, motors, etc. – an accurate absolute reference is unlikely, but the field is approx. constant, so can be used for local reference Gyroscopes • A gyroscope is a spinning wheel with most of its mass concentrated in the outer periphery – e.g. a bicycle wheel • Due to the law of conservation of momentum – the spinning wheel will stay in its original orientation – a force is required to rotate the gyroscope • A gyro. can thus be used to maintain orientation or to measure the rate and direction of rotation • In fact there are different types of mechanical gyro. – and even optical gyros with no moving parts! • these can be used in e.g. space probes to maintain orientation Ring Gyros • Use standing waves set up – between mirrors (laser ring gyro) – within a fiber optic cable (fibre optic ring gyro) • Measure rotation by observing beats in standing wave as the mirrors "rotate through it". IMUs • Gyro, accelerometer combination. • Typical designs (e.g. 3DM-GX1™) use tri-axial gyros to track dynamic orientation and tri-axial DC accelerometers along with the tri-axial magnetometers to track static orientation. • The embedded microprocessors contain programmable filter algorithms, which blend these static and dynamic responses in real-time. GPS • GPS uses a constellation of between 24 and 32 Medium Earth Orbit satellites. • Satellites broadcast their position + time. • Use travel time of 4 satellites and trilateration. • Suffers from “canyon” effect in cities. WiFi • Using the SSID and database. Odor Sensing Smell is ubiquitous in nature … both as an active and a passive sensor. Why is it so important? 
Advantages: evanescent, controllable, multi-valued, useful. What is an actuator? • Device for moving or controlling a system. • “Robot Muscles” Hydraulic Actuators • Pros: – Powerful – Fast – Stiff • Cons – Messy – Maintenance – External Pump Pneumatic Actuators • Pros: – Powerful – Cheap • Cons – Soft/Compliant – External Compressor Shape Memory Alloy Actuators • Works by warming and cooling Nitinol wires. • Pros: – Light – Powerful • Cons: – Slow (cooling) Electric Actuators • Pros – Better position precision – Well understood – No separate power source – Cheap • Cons – Heavy – Weaker/slower than hydraulics – Cooling issue • Stepper motors • DC motors – Servos • Continuous • Position • Others (not discussed) – Linear actuators – AC motors
Only use the provided text to answer the question. Do not use outside resources. The entire answer should be short. EVIDENCE: **Robotics Sensors and Actuators** Robot Sensors • Sensors are devices that can sense and measure physical properties of the environment, • e.g. temperature, luminance, resistance to touch, weight, size, etc. • The key phenomenon is transduction • Transduction (engineering) is a process that converts one type of energy to another • They deliver low-level information about the environment the robot is working in. – Return an incomplete description of the world • This information is noisy (imprecise). • Cannot be modelled completely: – Reading = f(env) where f is the model of the sensor – Finding the inverse: • ill-posed problem (solution not uniquely defined) • collapsing of dimensionality leads to ambiguity Types of Sensor • General classification: – active versus passive • Active: emit energy in environment – More robust, less efficient • Passive: passively receive energy from env. – Less intrusive, but depends on env. e.g. light for camera • Example: stereo vision versus range finder. – contact versus non-contact Sensors • Proprioceptive Sensors (monitor state of robot) – IMU (accels & gyros) – Wheel encoders – Doppler radar … • Exteroceptive Sensors (monitor environment) – Cameras (single, stereo, omni, FLIR …) – Laser scanner – MW radar – Sonar – Tactile… Sensor Characteristics All sensors are characterized by various properties that describe their capabilities – Sensitivity: (change of output) ÷ (change of input) – Linearity: constancy of (output ÷ input) • Exception: logarithmic response cameras == wider dynamic range. – Measurement/Dynamic range: difference between min. and max. – Response Time: time required for a change in input to cause a change in the output – Accuracy: difference between measured & actual – Repeatability: difference between repeated measures – Resolution: smallest observable increment – Bandwidth: result of high resolution or cycle time Types of Sensor Specific examples – tactile – close-range proximity – angular position – infrared – Sonar – laser (various types) – radar – compasses, gyroscopes – Force – GPS – vision Tactile Sensors There are many different technologies – e.g. contact closure, magnetic, piezoelectric, etc. • For mobile robots these can be classified as – tactile feelers (antennae) often some form of metal wire passing through a wire loop - can be active (powered to mechanically search for surfaces) § tactile bumpers solid bar / plate acts on some form of contact switch e.g. mirror deflecting light beam, pressure bladder, wire loops, etc. § Pressure-sensitive rubber with scanning array Vibrissae/whiskers of rats – Surface texture information. – Distance of deflection. – Blind people using a cane. 
Proximity Sensors Tactile sensors allow obstacle detection – proximity sensors needed for true obstacle avoidance • Several technologies can detect the presence of particular fields without mechanical contact – magnetic reed switches • two thin magnetic strips of opposite polarity not quite touching • an external magnetic field closes the strip & makes contact Hall effect sensors • small voltage generated across a conductor carrying current – inductive sensors, capacitive sensors • inductive sensors can detect presence of metallic objects • capacitive sensors can detect metallic or dielectric materials Infrared Sensors Infrared sensors are probably the simplest type of non-contact sensor – widely used in mobile robotics to avoid obstacles • They work by – emitting infrared light • to differentiate emitted IR from ambient IR (e.g. lights, sun, etc.), the signal is modulated with a low frequency (100 Hz) – detecting any reflections off nearby surfaces • In certain environments, with careful calibration, IR sensors can be used for measuring the distance to the object – requires uniform surface colours and structures Infrared Sensors (Sharp) Measures the return angle of the infrared beam. Infrared Problems If the IR signal is detected, it is safe to assume that an object is present • However, the absence of reflected IR does not mean that no object is present! – “Absence of evidence is not evidence of absence.” C. Sagan – certain dark colours (black) are almost invisible to IR – IR sensors are not absolutely safe for object detection • In realistic situations (different colours & types of objects) there is no accurate distance information – it is best to avoid objects as soon as possible • IR are short range – typical maximum range is 50 to 100 cm Sonar Sensors • The fundamental principle of robot sonar sensors is the same as that used by bats – emit a chirp (e.g. 1.2 milliseconds) • a short powerful pulse of a range of frequencies of sound – its reflection off nearby surfaces is detected • As the speed of sound in air is known (≈ 330 m·s⁻¹) the distance to the object can be computed from the elapsed time between chirp and echo – minimum distance = 165 t_chirp (e.g. 21 cm at 1.2 ms) – maximum distance = 165 t_wait (e.g. 165 m at 1 s) • Usually referred to as ultrasonic sensors Sonar Problems • There are a number of problems and uncertainties associated with readings from sonar sensors – it is difficult to be sure in which direction an object is because the 3D sonar beam spreads out as it travels – specular reflections give rise to erroneous readings • the sonar beam hits a smooth surface at a shallow angle and so reflects away from the sensor • only when an object further away reflects the beam back does the sensor obtain a reading - but distance is incorrect – arrays of sonar sensors can experience crosstalk • one sensor detects the reflected beam of another sensor – the speed of sound varies with air temp. and pressure • a 16° C temp. 
change can cause a 30cm error at 10m Laser Range Finders • Laser range finders commonly used to measure the distance, velocity and acceleration of objects – also known as laser radar or lidar • The operating principle is the same as sonar – a short pulse of (laser) light is emitted – the time elapsed between emission and detection is used to determine distance (using the speed of light) • Due to the shorter wavelengths of lasers, the chance of specular reflections is much less – accuracies of millimetres (16 - 50mm) over 100m – 1D beam is usually swept to give a 2D planar beam • May not detect transparent surfaces (e.g. glass!) or dark objects RADAR • Radar usually uses electromagnetic energy in the 1 - 12.5 GHz frequency range – this corresponds to wavelengths of 30 cm - 2 cm • microwave energy – unaffected by fog, rain, dust, haze and smoke • It may use a pulsed time-of-flight methodology of sonar and lidar, but may also use other methods – continuous-wave phase detection – continuous-wave frequency modulation • Continuous-wave systems make use of Doppler effect to measure relative velocity of the target Angular Position: Rotary Encoder • Potentiometer – Used in the Servo on the boebots • Optical Disks (Relative) – Counting the slots – Direction by having pairs of emitters/receivers out of phase: Quadrature decoding – Can spin very fast: 500 kHz • Optical Disks (Absolute) – Grey encoding for absolute: • 0:0000, 1:1000, 2:1100, 3:0100, 4:0110, • 5:1110, 6:1010, 7:0010, 8:0011 • 9:1011, 10:1111, 11:0111, 12:0101, 13:1101, 14:1001, 15:0001 Compass Sensors • Compass sensors measure the horizontal component of the earth’s magnetic field – some birds use the vertical component too • The earth’s magnetic field is very weak and non-uniform, and changes over time – indoors there are likely to be many other field sources • steel girders, reinforced concrete, power lines, motors, etc. – an accurate absolute reference is unlikely, but the field is approx. constant, so can be used for local reference Gyroscopes • A gyroscope is a spinning wheel with most of its mass concentrated in the outer periphery – e.g. a bicycle wheel • Due to the law of conservation of momentum – the spinning wheel will stay in its original orientation – a force is required to rotate the gyroscope • A gyro. can thus be used to maintain orientation or to measure the rate and direction of rotation • In fact there are different types of mechanical gyro. – and even optical gyros with no moving parts! • these can be used in e.g. space probes to maintain orientation Ring Gyros • Use standing waves set up – between mirrors (laser ring gyro) – within a fiber optic cable (fibre optic ring gyro) • Measure rotation by observing beats in standing wave as the mirrors "rotate through it". IMUs • Gyro, accelerometer combination. • Typical designs (e.g. 3DM-GX1™) use tri-axial gyros to track dynamic orientation and tri-axial DC accelerometers along with the tri-axial magnetometers to track static orientation. • The embedded microprocessors contain programmable filter algorithms, which blend these static and dynamic responses in real-time. GPS • GPS uses a constellation of between 24 and 32 Medium Earth Orbit satellites. • Satellites broadcast their position + time. • Use travel time of 4 satellites and trilateration. • Suffers from “canyon” effect in cities. WiFi • Using the SSID and database. Odor Sensing Smell is ubiquitous in nature … both as an active and a passive sensor. Why is it so important? 
Advantages: evanescent, controllable, multi-valued, useful. What is an actuator? • Device for moving or controlling a system. • “Robot Muscles” Hydraulic Actuators • Pros: – Powerful – Fast – Stiff • Cons – Messy – Maintenance – External Pump Pneumatic Actuators • Pros: – Powerful – Cheap • Cons – Soft/Compliant – External Compressor Shape Memory Alloy Actuators • Works by warming and cooling Nitinol wires. • Pros: – Light – Powerful • Cons: – Slow (cooling) Electric Actuators • Pros – Better position precision – Well understood – No separate power source – Cheap • Cons – Heavy – Weaker/slower than hydraulics – Cooling issue • Stepper motors • DC motors – Servos • Continuous • Position • Others (not discussed) – Linear actuators – AC motors USER: According to the provided text, what is the typical maximum range for Infrared (IR)? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
14
1,661
null
6
For this task, answer questions exclusively from the knowledge you gain from the information within the prompt. Head each paragraph of your response with a bolded question pertaining to the information following it.
Summarize the key points of menu labeling into the form of paragraphs.
Research Evaluating the Impact of Menu Labeling It is difficult to predict what effect, if any, mandatory restaurant menu labeling will have on food purchasing and health outcomes. However, changes in behavior following implementation of calorie labeling regulations in other jurisdictions prior to publication of the final federal rule (e.g., New York City, Philadelphia, and King County, WA) may provide some insight. Studies of the Impact of Menu Labeling on Calories Purchased Studies examining the relationship between menu labeling and calorie purchasing behavior have yielded mixed findings. Although consumers often report ordering fewer calories as a result of menu labeling, studies examining restaurant transaction data have not consistently reported a decrease in calories purchased after implementation of menu labeling. This section discusses several studies that have evaluated the impact of menu labeling, using survey and transaction data, on calories purchased. 17 Findings from current research are limited because existing studies often vary in scope and methodology. 18 For example, several of the studies that did not find a post-labeling decrease in calories purchased were conducted by the same group of researchers using samples from lowincome communities in New York, NY and Newark, NJ, 19 and research has shown that there are socioeconomic disparities in calorie label use, with higher-income individuals being more likely to notice calorie labels.20 Another study limited its sample population to one chain of restaurants in King County, WA. 21 An additional factor to consider is the time frame between implementation of menu labeling and an assessment of purchasing behavior, as there needs to be enough time for an effect to take place. One study, for instance, did not find an effect at four to six months postmandatory menu labeling, but it did find a decrease in calories purchased 18 months after implementation.22 Another study that did not find an effect of menu labeling on calories purchased examined outcomes two months after implementation, which may not have been enough time for an effect to take place.23 In addition, most of these studies relied on self-reported data to assess customers’ awareness and use of calorie labels. Such self-reporting may not be accurate, as evidenced by the inconsistencies between reported calories purchased and actual calories purchased as indicated on receipts.24 Finally, these studies analyzed the number of calories purchased but not changes in calories consumed, which may differ in response to menu labeling. For example, in full-service restaurants, customers may be more likely to share a meal or eat half the meal and take the rest home, which would not be captured by transaction data. Similarly, in fast food or carry-out establishments, customers may consume only a portion of their meal, which would not be captured by transaction data. Studies of the Impact of Menu Labeling on Sales and Revenue In 2009, Starbucks commissioned a Stanford University study to determine how the menu labeling mandate in New York City (NYC) affected its overall sales.25 Findings indicate that after the implementation of mandatory calorie labeling, average calories per transaction fell by 6% at Starbucks, an effect that lasted 10 months after the calorie posting commenced. 
This effect was primarily found for food purchases, as the average food calories per transaction fell by 14% (i.e., approximately 14 calories per transaction), while average beverage calories per transaction did not substantially change. Changes in beverage calories may not be reflected in transaction data. For example, if a customer orders a latte and substitutes skim milk for 2% milk, or asks for one pump of syrup instead of the usual three or four, those substitutions would not be captured by transaction data because the cost of the latte would not change. This study also assessed the impact of calorie posting on Starbucks revenue, reporting no statistically significant change in revenue as a result of calorie labeling. Because cost data associated with the policy was unavailable, profits were not measured directly. The effect on revenue was divided into (1) the effect on the number of transactions and (2) the effect on revenue per transaction. The study found that daily store transactions increased by 1.4% on average, while revenue per transaction decreased by 0.8% on average for all Starbucks in NYC, resulting in a zero net impact of calorie posting on Starbucks revenues. In NYC Starbucks stores located within 100 meters of a Dunkin Donuts, daily revenue increased by 3.3% on average. To determine consumers’ preliminary knowledge of calories in Starbucks food and beverages, surveys were administered before and after the introduction of a calorie-posting law in Seattle.26 Pre-menu labeling survey data indicate that Starbucks customers tended to be inaccurate in predicting the number of calories in their beverage and food orders. Specifically, in this study, consumers overestimated the number of calories in beverages and underestimated the number of calories in food. This is consistent with the study’s finding that calorie posting discouraged individuals from purchasing food but not beverages. Because consumers tended to underestimate the number of calories in food items, seeing the posted caloric value, which was greater than initially expected, may have led consumers to reduce their food purchases. However, because consumers tended to overestimate beverage calories, calorie posting may not have discouraged people from purchasing beverages. Proponents of menu labeling argue that, in addition to affecting consumer purchasing behavior, mandatory menu labeling may incentivize restaurants to offer lower calorie options and provide consumers with healthier choices. A study in the American Journal of Preventive Medicine reported that new menu items in restaurant chains in 2013 contained approximately 60 fewer calories compared with menu items in 2012—a 12% drop in calories.27 This voluntary action by large chain restaurants may have been in anticipation of the ACA’s federal menu-labeling provisions which will be in effect May 7, 2018.
Question: Summarize the key points of menu labeling into the form of paragraphs. Context: Research Evaluating the Impact of Menu Labeling It is difficult to predict what effect, if any, mandatory restaurant menu labeling will have on food purchasing and health outcomes. However, changes in behavior following implementation of calorie labeling regulations in other jurisdictions prior to publication of the final federal rule (e.g., New York City, Philadelphia, and King County, WA) may provide some insight. Studies of the Impact of Menu Labeling on Calories Purchased Studies examining the relationship between menu labeling and calorie purchasing behavior have yielded mixed findings. Although consumers often report ordering fewer calories as a result of menu labeling, studies examining restaurant transaction data have not consistently reported a decrease in calories purchased after implementation of menu labeling. This section discusses several studies that have evaluated the impact of menu labeling, using survey and transaction data, on calories purchased. 17 Findings from current research are limited because existing studies often vary in scope and methodology. 18 For example, several of the studies that did not find a post-labeling decrease in calories purchased were conducted by the same group of researchers using samples from lowincome communities in New York, NY and Newark, NJ, 19 and research has shown that there are socioeconomic disparities in calorie label use, with higher-income individuals being more likely to notice calorie labels.20 Another study limited its sample population to one chain of restaurants in King County, WA. 21 An additional factor to consider is the time frame between implementation of menu labeling and an assessment of purchasing behavior, as there needs to be enough time for an effect to take place. One study, for instance, did not find an effect at four to six months postmandatory menu labeling, but it did find a decrease in calories purchased 18 months after implementation.22 Another study that did not find an effect of menu labeling on calories purchased examined outcomes two months after implementation, which may not have been enough time for an effect to take place.23 In addition, most of these studies relied on self-reported data to assess customers’ awareness and use of calorie labels. Such self-reporting may not be accurate, as evidenced by the inconsistencies between reported calories purchased and actual calories purchased as indicated on receipts.24 Finally, these studies analyzed the number of calories purchased but not changes in calories consumed, which may differ in response to menu labeling. For example, in full-service restaurants, customers may be more likely to share a meal or eat half the meal and take the rest home, which would not be captured by transaction data. Similarly, in fast food or carry-out establishments, customers may consume only a portion of their meal, which would not be captured by transaction data. Studies of the Impact of Menu Labeling on Sales and Revenue In 2009, Starbucks commissioned a Stanford University study to determine how the menu labeling mandate in New York City (NYC) affected its overall sales.25 Findings indicate that after the implementation of mandatory calorie labeling, average calories per transaction fell by 6% at Starbucks, an effect that lasted 10 months after the calorie posting commenced. 
This effect was primarily found for food purchases, as the average food calories per transaction fell by 14% (i.e., approximately 14 calories per transaction), while average beverage calories per transaction did not substantially change. Changes in beverage calories may not be reflected in transaction data. For example, if a customer orders a latte and substitutes skim milk for 2% milk, or asks for one pump of syrup instead of the usual three or four, those substitutions would not be captured by transaction data because the cost of the latte would not change. This study also assessed the impact of calorie posting on Starbucks revenue, reporting no statistically significant change in revenue as a result of calorie labeling. Because cost data associated with the policy was unavailable, profits were not measured directly. The effect on revenue was divided into (1) the effect on the number of transactions and (2) the effect on revenue per transaction. The study found that daily store transactions increased by 1.4% on average, while revenue per transaction decreased by 0.8% on average for all Starbucks in NYC, resulting in a zero net impact of calorie posting on Starbucks revenues. In NYC Starbucks stores located within 100 meters of a Dunkin Donuts, daily revenue increased by 3.3% on average. To determine consumers’ preliminary knowledge of calories in Starbucks food and beverages, surveys were administered before and after the introduction of a calorie-posting law in Seattle.26 Pre-menu labeling survey data indicate that Starbucks customers tended to be inaccurate in predicting the number of calories in their beverage and food orders. Specifically, in this study, consumers overestimated the number of calories in beverages and underestimated the number of calories in food. This is consistent with the study’s finding that calorie posting discouraged individuals from purchasing food but not beverages. Because consumers tended to underestimate the number of calories in food items, seeing the posted caloric value, which was greater than initially expected, may have led consumers to reduce their food purchases. However, because consumers tended to overestimate beverage calories, calorie posting may not have discouraged people from purchasing beverages. Proponents of menu labeling argue that, in addition to affecting consumer purchasing behavior, mandatory menu labeling may incentivize restaurants to offer lower calorie options and provide consumers with healthier choices. A study in the American Journal of Preventive Medicine reported that new menu items in restaurant chains in 2013 contained approximately 60 fewer calories compared with menu items in 2012—a 12% drop in calories.27 This voluntary action by large chain restaurants may have been in anticipation of the ACA’s federal menu-labeling provisions which will be in effect May 7, 2018. System Instructions: For this task, answer questions exclusively from the knowledge you gain from the information within the prompt. Head each paragraph of your response with a bolded question pertaining to the information following it.
For this task, answer questions exclusively from the knowledge you gain from the information within the prompt. Head each paragraph of your response with a bolded question pertaining to the information following it. EVIDENCE: Research Evaluating the Impact of Menu Labeling It is difficult to predict what effect, if any, mandatory restaurant menu labeling will have on food purchasing and health outcomes. However, changes in behavior following implementation of calorie labeling regulations in other jurisdictions prior to publication of the final federal rule (e.g., New York City, Philadelphia, and King County, WA) may provide some insight. Studies of the Impact of Menu Labeling on Calories Purchased Studies examining the relationship between menu labeling and calorie purchasing behavior have yielded mixed findings. Although consumers often report ordering fewer calories as a result of menu labeling, studies examining restaurant transaction data have not consistently reported a decrease in calories purchased after implementation of menu labeling. This section discusses several studies that have evaluated the impact of menu labeling, using survey and transaction data, on calories purchased. 17 Findings from current research are limited because existing studies often vary in scope and methodology. 18 For example, several of the studies that did not find a post-labeling decrease in calories purchased were conducted by the same group of researchers using samples from lowincome communities in New York, NY and Newark, NJ, 19 and research has shown that there are socioeconomic disparities in calorie label use, with higher-income individuals being more likely to notice calorie labels.20 Another study limited its sample population to one chain of restaurants in King County, WA. 21 An additional factor to consider is the time frame between implementation of menu labeling and an assessment of purchasing behavior, as there needs to be enough time for an effect to take place. One study, for instance, did not find an effect at four to six months postmandatory menu labeling, but it did find a decrease in calories purchased 18 months after implementation.22 Another study that did not find an effect of menu labeling on calories purchased examined outcomes two months after implementation, which may not have been enough time for an effect to take place.23 In addition, most of these studies relied on self-reported data to assess customers’ awareness and use of calorie labels. Such self-reporting may not be accurate, as evidenced by the inconsistencies between reported calories purchased and actual calories purchased as indicated on receipts.24 Finally, these studies analyzed the number of calories purchased but not changes in calories consumed, which may differ in response to menu labeling. For example, in full-service restaurants, customers may be more likely to share a meal or eat half the meal and take the rest home, which would not be captured by transaction data. Similarly, in fast food or carry-out establishments, customers may consume only a portion of their meal, which would not be captured by transaction data. 
Studies of the Impact of Menu Labeling on Sales and Revenue In 2009, Starbucks commissioned a Stanford University study to determine how the menu labeling mandate in New York City (NYC) affected its overall sales.25 Findings indicate that after the implementation of mandatory calorie labeling, average calories per transaction fell by 6% at Starbucks, an effect that lasted 10 months after the calorie posting commenced. This effect was primarily found for food purchases, as the average food calories per transaction fell by 14% (i.e., approximately 14 calories per transaction), while average beverage calories per transaction did not substantially change. Changes in beverage calories may not be reflected in transaction data. For example, if a customer orders a latte and substitutes skim milk for 2% milk, or asks for one pump of syrup instead of the usual three or four, those substitutions would not be captured by transaction data because the cost of the latte would not change. This study also assessed the impact of calorie posting on Starbucks revenue, reporting no statistically significant change in revenue as a result of calorie labeling. Because cost data associated with the policy was unavailable, profits were not measured directly. The effect on revenue was divided into (1) the effect on the number of transactions and (2) the effect on revenue per transaction. The study found that daily store transactions increased by 1.4% on average, while revenue per transaction decreased by 0.8% on average for all Starbucks in NYC, resulting in a zero net impact of calorie posting on Starbucks revenues. In NYC Starbucks stores located within 100 meters of a Dunkin Donuts, daily revenue increased by 3.3% on average. To determine consumers’ preliminary knowledge of calories in Starbucks food and beverages, surveys were administered before and after the introduction of a calorie-posting law in Seattle.26 Pre-menu labeling survey data indicate that Starbucks customers tended to be inaccurate in predicting the number of calories in their beverage and food orders. Specifically, in this study, consumers overestimated the number of calories in beverages and underestimated the number of calories in food. This is consistent with the study’s finding that calorie posting discouraged individuals from purchasing food but not beverages. Because consumers tended to underestimate the number of calories in food items, seeing the posted caloric value, which was greater than initially expected, may have led consumers to reduce their food purchases. However, because consumers tended to overestimate beverage calories, calorie posting may not have discouraged people from purchasing beverages. Proponents of menu labeling argue that, in addition to affecting consumer purchasing behavior, mandatory menu labeling may incentivize restaurants to offer lower calorie options and provide consumers with healthier choices. A study in the American Journal of Preventive Medicine reported that new menu items in restaurant chains in 2013 contained approximately 60 fewer calories compared with menu items in 2012—a 12% drop in calories.27 This voluntary action by large chain restaurants may have been in anticipation of the ACA’s federal menu-labeling provisions which will be in effect May 7, 2018. USER: Summarize the key points of menu labeling into the form of paragraphs. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
33
12
954
null
792
ONLY USE THE DATA I PROVIDE Limit your response to 500 words Organize the response in a FAQs document If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context"
According to this document, what are the minimum rights that must be guaranteed with regard to employee working hours and conditions?
**Responsible Supplier Policy Background** Virgin Atlantic is a purpose led company and we believe that everyone can take on the world. It underpins everything we do and drives us to do better for our planet, people, customers and communities. As an airline, we know we must lead from the front. Responsibly bringing the benefits of travel, connectivity and exploration to the communities and customers we serve. For us that means tackling our carbon footprint, championing inclusion and being an advocate for change. We work with suppliers who share our values and, like us, see business as a force for good. Ensuring that the products and services we buy are sourced as sustainably as possible, with partners that innovate to improve practices, drive positive impact and bring economic and societal benefits. • Sourcing goods and services in a way that treats the people we work with (directly and indirectly) with respect and dignity • Supporting practices that minimise damage to the environment and natural resources on which we all depend • Promote positive animal welfare standards. Choosing who we work with matters. This policy sets out the standards we adhere to at Virgin Atlantic and that we expect our suppliers to comply with too. Based on best-in-class and internationally agreed standards to reduce our environmental impact, ensure basic human rights and protect animal welfare. Our commitment At Virgin Atlantic, our people with responsibility for procurement and supplier management put sustainable procurement practices at the heart of everything they do. The sustainability criteria outlined in this policy are built into our procurement process, from sourcing and selection to contract award and ongoing contract management. Over time these will increasingly become a prerequisite for all our suppliers. We encourage all suppliers to proactively work to improve practices in relation to these principles, in order to secure new and ongoing contracts with us. We know it’s not always straightforward, but we do expect openness and transparency in our relationships with our suppliers. We support continuous improvement with suppliers who need help in any area of this policy. Ultimate responsibility for this policy is held by our Procurement Director with full endorsement by our Chief Executive Officer. Your commitment We ask our suppliers to commit to the following: • To embrace the policy and assign a senior member of the business to promote skills and compliance. • To set up a documented monitoring process to verify standards are met and continually reviewed to ensure compliance, with a process for corrective actions to be set up and followed through. • To look to impose a policy of similar or higher standards on their own supply chains, including any sub-contractors they work with. The expectation is that each supplier in the supply chain will monitor their own compliance with a view that Virgin Atlantic or Virgin Atlantic Holidays will be able to meaningfully audit any tier in the chain if this is required. • To make their workforce (including those not directly employed by the Supplier i.e. agency staff, contractors and subcontractors) aware of the policy or the supplier’s own policy, if this is to a higher standard, and provide them with the appropriate training and skills to continually improve the supply chain. Minimum requirements We expect all suppliers to meet all requirements in this section. 
For some suppliers, certain certifications or standards may be minimum requirements (see following sections) and these would be communicated by the contract owner. People • Suppliers should provide safe and fair working conditions for their employees. Standards should, at a minimum, meet national laws. • Suppliers must not use child labour defined as anyone under 15 years of age, or as stipulated in the International Labour Organisation (ILO) C138 Minimum Age Convention. • There is no forced, bonded or compulsory labour. • Workers are not required to lodge ’deposits’ or their identity papers with their employer and are free to leave their employer after reasonable notice. • The company shall respect the right of personnel to a living wage and ensure that wages paid for a normal work week shall always meet at least legal or industry minimum standards and shall be sufficient to meet the basic needs of personnel and to provide some discretionary income. • All workers shall be provided with written and understandable information about their employment conditions before they enter employment. • Employees should be allowed freedom of association and the right to collective bargaining. Where the law restricts freedom of association and collective bargaining, employers should facilitate alternative means of representation by staff. • Employees working hours should comply with national laws and industry standards at a minimum. They should have at least 1 day off in 7 on average, and overtime should be voluntary and comply with local working law legislation. Employees should also be given regular breaks. • Working conditions must be safe and hygienic (bearing in mind any hazards specific to that industry), with access to clean toilets and water for drinking and washing. There should also be access to medical care when needed. • Employees should receive regular health and safety training and guidance, with clear health and safety procedures for all staff in the workplace, including those specific to their role. A senior representative should hold responsibility for the health and safety of all staff, including emergency procedures, and all accidents should be logged. • Accommodation, where provided, shall be clean, safe, and meet the basic needs of the workers. • Employees should not be submitted to harsh or inhumane treatment and all disciplinary procedures should be held on record. • Employees should not suffer discrimination in employment on any grounds including but not limited to: gender, race, age, disability, religion, political affiliation, sexual orientation, medical condition or freedom of association. Environment • Suppliers should comply with local and national environmental legislation. • Suppliers should monitor and reduce the environmental impacts of their business including: o Reducing fossil energy and fuel use, electricity use, and associated greenhouse gas emissions. o Reducing and recycling waste. o Ensuring responsible water management, including water saving measures and protection of supply of clean water to communities where these are affected by supplier operations. o Minimising the use of environmentally damaging chemicals and ensuring responsible disposal to prevent pollution of land and water sources. o Preventing negative business impacts on forests, land use, biodiversity and wild life, and ensuring high-value native eco-systems are maintained. 
Animal Welfare Suppliers of tourist attractions or hotels featuring animals should avoid any form of animal neglect or cruelty and fully adhere to the minimum standards set out in the ABTA Global Guidelines for Animal Welfare. Business Ethics We have a strict anti-bribery policy and expect our suppliers to uphold high standards of integrity, transparency and governance. At a minimum we expect suppliers: • To comply with all relevant local laws and regulations. • Not to be associated with any group that supports acts of violence or terrorism. • Not to offer, promise, give or receive any bribe or kickback and/or other improper advantage to or from any person, customer or supplier. • Not to make nor offer, directly or indirectly, any payment, gift or other advantage to a Foreign Public Official with the intention of influencing them and obtaining or retaining an advantage in the conduct of business. • To adhere to our anti-facilitation of tax evasion policy and not engage in any activity, practice or conduct which would cause an offence to be committed relating to the prevention of tax evasion and/or the facilitation of tax evasion under the Criminal Finances Act 2017. Priority working practices As part of our relationship with our suppliers there are priority areas of improvement that we want our suppliers to incorporate in support of doing ongoing business together. People • Suppliers should take responsibility for the local community they operate in, maximising social and economic benefits to the local community and minimising negative impacts. • Where medical conditions such as HIV / AIDS, malaria, hepatitis B etc. are a significant issue, employers should raise awareness to their employees on the risks of these medical conditions and assist in providing access to education, treatment and medication where possible. Employers should not subject employees to mandatory testing or ask employees to disclose their medical status. • Employers should not subject employees to mandatory pregnancy or virginity testing or questioning. Environment • Suppliers should have an environmental management system in place, whereby relevant, material environmental impacts are monitored and steps are taken to significantly reduce these. • For all onboard products, suppliers should help us to minimise fuel use and carbon emissions by providing us with lightweight products and packaging (without detriment to the safety or security of that product). • Suppliers should consider the full life cycle of products during design and packaging by aiming to (in order): o Reduce material, to reduce both weight and waste o Source all materials from sustainable, renewable or recycled sources. o Provide durable products that allow for re-use wherever possible o Ensure material can be recycled (providing clear labelling to show the recycling route) • Suppliers should remove or reduce all single use items wherever possible (including, but not limited to plastics), and where alternative materials are used they should come from recycled or otherwise verified sustainable sources. • Suppliers are required to ensure products associated with high rainforest destruction risk (i.e., made from, or consisting of, beef, leather, paper, wood, soy, palm oil or biofuels) are responsibly sourced through transparent supply chains with the appropriate independent certification. • All goods and services should be as resource efficient as possible, e.g. 
energy efficient lighting and appliances, fuel efficient, hybrid or electric cars, renewable electricity or low water use appliances. • Suppliers should help us reduce the carbon footprint of our products through effective management and reduction of their own carbon footprint through: o Utilising efficient manufacturing processes, and by using materials which do not require excessive energy to extract or produce. o Using local suppliers and running efficient logistical systems that reduce transport of our customers, staff and products as far as possible, to reduce both carbon emissions and local air pollution. o Responsible waste reductions and recycling within their business.
{Query} ========== According to this document, what are the minimum rights that must be guaranteed with regard to employee working hours and conditions? {Task Instructions} ========== ONLY USE THE DATA I PROVIDE Limit your response to 500 words Organize the response in a FAQs document If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context" {Text Passage} ========== **Responsible Supplier Policy Background** Virgin Atlantic is a purpose led company and we believe that everyone can take on the world. It underpins everything we do and drives us to do better for our planet, people, customers and communities. As an airline, we know we must lead from the front. Responsibly bringing the benefits of travel, connectivity and exploration to the communities and customers we serve. For us that means tackling our carbon footprint, championing inclusion and being an advocate for change. We work with suppliers who share our values and, like us, see business as a force for good. Ensuring that the products and services we buy are sourced as sustainably as possible, with partners that innovate to improve practices, drive positive impact and bring economic and societal benefits. • Sourcing goods and services in a way that treats the people we work with (directly and indirectly) with respect and dignity • Supporting practices that minimise damage to the environment and natural resources on which we all depend • Promote positive animal welfare standards. Choosing who we work with matters. This policy sets out the standards we adhere to at Virgin Atlantic and that we expect our suppliers to comply with too. Based on best-in-class and internationally agreed standards to reduce our environmental impact, ensure basic human rights and protect animal welfare. Our commitment At Virgin Atlantic, our people with responsibility for procurement and supplier management put sustainable procurement practices at the heart of everything they do. The sustainability criteria outlined in this policy are built into our procurement process, from sourcing and selection to contract award and ongoing contract management. Over time these will increasingly become a prerequisite for all our suppliers. We encourage all suppliers to proactively work to improve practices in relation to these principles, in order to secure new and ongoing contracts with us. We know it’s not always straightforward, but we do expect openness and transparency in our relationships with our suppliers. We support continuous improvement with suppliers who need help in any area of this policy. Ultimate responsibility for this policy is held by our Procurement Director with full endorsement by our Chief Executive Officer. Your commitment We ask our suppliers to commit to the following: • To embrace the policy and assign a senior member of the business to promote skills and compliance. • To set up a documented monitoring process to verify standards are met and continually reviewed to ensure compliance, with a process for corrective actions to be set up and followed through. • To look to impose a policy of similar or higher standards on their own supply chains, including any sub-contractors they work with. The expectation is that each supplier in the supply chain will monitor their own compliance with a view that Virgin Atlantic or Virgin Atlantic Holidays will be able to meaningfully audit any tier in the chain if this is required. 
• To make their workforce (including those not directly employed by the Supplier i.e. agency staff, contractors and subcontractors) aware of the policy or the supplier’s own policy, if this is to a higher standard, and provide them with the appropriate training and skills to continually improve the supply chain. Minimum requirements We expect all suppliers to meet all requirements in this section. For some suppliers, certain certifications or standards may be minimum requirements (see following sections) and these would be communicated by the contract owner. People • Suppliers should provide safe and fair working conditions for their employees. Standards should, at a minimum, meet national laws. • Suppliers must not use child labour defined as anyone under 15 years of age, or as stipulated in the International Labour Organisation (ILO) C138 Minimum Age Convention. • There is no forced, bonded or compulsory labour. • Workers are not required to lodge ’deposits’ or their identity papers with their employer and are free to leave their employer after reasonable notice. • The company shall respect the right of personnel to a living wage and ensure that wages paid for a normal work week shall always meet at least legal or industry minimum standards and shall be sufficient to meet the basic needs of personnel and to provide some discretionary income. • All workers shall be provided with written and understandable information about their employment conditions before they enter employment. • Employees should be allowed freedom of association and the right to collective bargaining. Where the law restricts freedom of association and collective bargaining, employers should facilitate alternative means of representation by staff. • Employees working hours should comply with national laws and industry standards at a minimum. They should have at least 1 day off in 7 on average, and overtime should be voluntary and comply with local working law legislation. Employees should also be given regular breaks. • Working conditions must be safe and hygienic (bearing in mind any hazards specific to that industry), with access to clean toilets and water for drinking and washing. There should also be access to medical care when needed. • Employees should receive regular health and safety training and guidance, with clear health and safety procedures for all staff in the workplace, including those specific to their role. A senior representative should hold responsibility for the health and safety of all staff, including emergency procedures, and all accidents should be logged. • Accommodation, where provided, shall be clean, safe, and meet the basic needs of the workers. • Employees should not be submitted to harsh or inhumane treatment and all disciplinary procedures should be held on record. • Employees should not suffer discrimination in employment on any grounds including but not limited to: gender, race, age, disability, religion, political affiliation, sexual orientation, medical condition or freedom of association. Environment • Suppliers should comply with local and national environmental legislation. • Suppliers should monitor and reduce the environmental impacts of their business including: o Reducing fossil energy and fuel use, electricity use, and associated greenhouse gas emissions. o Reducing and recycling waste. o Ensuring responsible water management, including water saving measures and protection of supply of clean water to communities where these are affected by supplier operations. 
o Minimising the use of environmentally damaging chemicals and ensuring responsible disposal to prevent pollution of land and water sources. o Preventing negative business impacts on forests, land use, biodiversity and wild life, and ensuring high-value native eco-systems are maintained. Animal Welfare Suppliers of tourist attractions or hotels featuring animals should avoid any form of animal neglect or cruelty and fully adhere to the minimum standards set out in the ABTA Global Guidelines for Animal Welfare. Business Ethics We have a strict anti-bribery policy and expect our suppliers to uphold high standards of integrity, transparency and governance. At a minimum we expect suppliers: • To comply with all relevant local laws and regulations. • Not to be associated with any group that supports acts of violence or terrorism. • Not to offer, promise, give or receive any bribe or kickback and/or other improper advantage to or from any person, customer or supplier. • Not to make nor offer, directly or indirectly, any payment, gift or other advantage to a Foreign Public Official with the intention of influencing them and obtaining or retaining an advantage in the conduct of business. • To adhere to our anti-facilitation of tax evasion policy and not engage in any activity, practice or conduct which would cause an offence to be committed relating to the prevention of tax evasion and/or the facilitation of tax evasion under the Criminal Finances Act 2017. Priority working practices As part of our relationship with our suppliers there are priority areas of improvement that we want our suppliers to incorporate in support of doing ongoing business together. People • Suppliers should take responsibility for the local community they operate in, maximising social and economic benefits to the local community and minimising negative impacts. • Where medical conditions such as HIV / AIDS, malaria, hepatitis B etc. are a significant issue, employers should raise awareness to their employees on the risks of these medical conditions and assist in providing access to education, treatment and medication where possible. Employers should not subject employees to mandatory testing or ask employees to disclose their medical status. • Employers should not subject employees to mandatory pregnancy or virginity testing or questioning. Environment • Suppliers should have an environmental management system in place, whereby relevant, material environmental impacts are monitored and steps are taken to significantly reduce these. • For all onboard products, suppliers should help us to minimise fuel use and carbon emissions by providing us with lightweight products and packaging (without detriment to the safety or security of that product). • Suppliers should consider the full life cycle of products during design and packaging by aiming to (in order): o Reduce material, to reduce both weight and waste o Source all materials from sustainable, renewable or recycled sources. o Provide durable products that allow for re-use wherever possible o Ensure material can be recycled (providing clear labelling to show the recycling route) • Suppliers should remove or reduce all single use items wherever possible (including, but not limited to plastics), and where alternative materials are used they should come from recycled or otherwise verified sustainable sources. 
• Suppliers are required to ensure products associated with high rainforest destruction risk (i.e., made from, or consisting of, beef, leather, paper, wood, soy, palm oil or biofuels) are responsibly sourced through transparent supply chains with the appropriate independent certification. • All goods and services should be as resource efficient as possible, e.g. energy efficient lighting and appliances, fuel efficient, hybrid or electric cars, renewable electricity or low water use appliances. • Suppliers should help us reduce the carbon footprint of our products through effective management and reduction of their own carbon footprint through: o Utilising efficient manufacturing processes, and by using materials which do not require excessive energy to extract or produce. o Using local suppliers and running efficient logistical systems that reduce transport of our customers, staff and products as far as possible, to reduce both carbon emissions and local air pollution. o Responsible waste reductions and recycling within their business.
ONLY USE THE DATA I PROVIDE Limit your response to 500 words Organize the response in a FAQs document If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context" EVIDENCE: **Responsible Supplier Policy Background** Virgin Atlantic is a purpose led company and we believe that everyone can take on the world. It underpins everything we do and drives us to do better for our planet, people, customers and communities. As an airline, we know we must lead from the front. Responsibly bringing the benefits of travel, connectivity and exploration to the communities and customers we serve. For us that means tackling our carbon footprint, championing inclusion and being an advocate for change. We work with suppliers who share our values and, like us, see business as a force for good. Ensuring that the products and services we buy are sourced as sustainably as possible, with partners that innovate to improve practices, drive positive impact and bring economic and societal benefits. • Sourcing goods and services in a way that treats the people we work with (directly and indirectly) with respect and dignity • Supporting practices that minimise damage to the environment and natural resources on which we all depend • Promote positive animal welfare standards. Choosing who we work with matters. This policy sets out the standards we adhere to at Virgin Atlantic and that we expect our suppliers to comply with too. Based on best-in-class and internationally agreed standards to reduce our environmental impact, ensure basic human rights and protect animal welfare. Our commitment At Virgin Atlantic, our people with responsibility for procurement and supplier management put sustainable procurement practices at the heart of everything they do. The sustainability criteria outlined in this policy are built into our procurement process, from sourcing and selection to contract award and ongoing contract management. Over time these will increasingly become a prerequisite for all our suppliers. We encourage all suppliers to proactively work to improve practices in relation to these principles, in order to secure new and ongoing contracts with us. We know it’s not always straightforward, but we do expect openness and transparency in our relationships with our suppliers. We support continuous improvement with suppliers who need help in any area of this policy. Ultimate responsibility for this policy is held by our Procurement Director with full endorsement by our Chief Executive Officer. Your commitment We ask our suppliers to commit to the following: • To embrace the policy and assign a senior member of the business to promote skills and compliance. • To set up a documented monitoring process to verify standards are met and continually reviewed to ensure compliance, with a process for corrective actions to be set up and followed through. • To look to impose a policy of similar or higher standards on their own supply chains, including any sub-contractors they work with. The expectation is that each supplier in the supply chain will monitor their own compliance with a view that Virgin Atlantic or Virgin Atlantic Holidays will be able to meaningfully audit any tier in the chain if this is required. • To make their workforce (including those not directly employed by the Supplier i.e. 
agency staff, contractors and subcontractors) aware of the policy or the supplier’s own policy, if this is to a higher standard, and provide them with the appropriate training and skills to continually improve the supply chain. Minimum requirements We expect all suppliers to meet all requirements in this section. For some suppliers, certain certifications or standards may be minimum requirements (see following sections) and these would be communicated by the contract owner. People • Suppliers should provide safe and fair working conditions for their employees. Standards should, at a minimum, meet national laws. • Suppliers must not use child labour defined as anyone under 15 years of age, or as stipulated in the International Labour Organisation (ILO) C138 Minimum Age Convention. • There is no forced, bonded or compulsory labour. • Workers are not required to lodge ’deposits’ or their identity papers with their employer and are free to leave their employer after reasonable notice. • The company shall respect the right of personnel to a living wage and ensure that wages paid for a normal work week shall always meet at least legal or industry minimum standards and shall be sufficient to meet the basic needs of personnel and to provide some discretionary income. • All workers shall be provided with written and understandable information about their employment conditions before they enter employment. • Employees should be allowed freedom of association and the right to collective bargaining. Where the law restricts freedom of association and collective bargaining, employers should facilitate alternative means of representation by staff. • Employees working hours should comply with national laws and industry standards at a minimum. They should have at least 1 day off in 7 on average, and overtime should be voluntary and comply with local working law legislation. Employees should also be given regular breaks. • Working conditions must be safe and hygienic (bearing in mind any hazards specific to that industry), with access to clean toilets and water for drinking and washing. There should also be access to medical care when needed. • Employees should receive regular health and safety training and guidance, with clear health and safety procedures for all staff in the workplace, including those specific to their role. A senior representative should hold responsibility for the health and safety of all staff, including emergency procedures, and all accidents should be logged. • Accommodation, where provided, shall be clean, safe, and meet the basic needs of the workers. • Employees should not be submitted to harsh or inhumane treatment and all disciplinary procedures should be held on record. • Employees should not suffer discrimination in employment on any grounds including but not limited to: gender, race, age, disability, religion, political affiliation, sexual orientation, medical condition or freedom of association. Environment • Suppliers should comply with local and national environmental legislation. • Suppliers should monitor and reduce the environmental impacts of their business including: o Reducing fossil energy and fuel use, electricity use, and associated greenhouse gas emissions. o Reducing and recycling waste. o Ensuring responsible water management, including water saving measures and protection of supply of clean water to communities where these are affected by supplier operations. 
o Minimising the use of environmentally damaging chemicals and ensuring responsible disposal to prevent pollution of land and water sources. o Preventing negative business impacts on forests, land use, biodiversity and wild life, and ensuring high-value native eco-systems are maintained. Animal Welfare Suppliers of tourist attractions or hotels featuring animals should avoid any form of animal neglect or cruelty and fully adhere to the minimum standards set out in the ABTA Global Guidelines for Animal Welfare. Business Ethics We have a strict anti-bribery policy and expect our suppliers to uphold high standards of integrity, transparency and governance. At a minimum we expect suppliers: • To comply with all relevant local laws and regulations. • Not to be associated with any group that supports acts of violence or terrorism. • Not to offer, promise, give or receive any bribe or kickback and/or other improper advantage to or from any person, customer or supplier. • Not to make nor offer, directly or indirectly, any payment, gift or other advantage to a Foreign Public Official with the intention of influencing them and obtaining or retaining an advantage in the conduct of business. • To adhere to our anti-facilitation of tax evasion policy and not engage in any activity, practice or conduct which would cause an offence to be committed relating to the prevention of tax evasion and/or the facilitation of tax evasion under the Criminal Finances Act 2017. Priority working practices As part of our relationship with our suppliers there are priority areas of improvement that we want our suppliers to incorporate in support of doing ongoing business together. People • Suppliers should take responsibility for the local community they operate in, maximising social and economic benefits to the local community and minimising negative impacts. • Where medical conditions such as HIV / AIDS, malaria, hepatitis B etc. are a significant issue, employers should raise awareness to their employees on the risks of these medical conditions and assist in providing access to education, treatment and medication where possible. Employers should not subject employees to mandatory testing or ask employees to disclose their medical status. • Employers should not subject employees to mandatory pregnancy or virginity testing or questioning. Environment • Suppliers should have an environmental management system in place, whereby relevant, material environmental impacts are monitored and steps are taken to significantly reduce these. • For all onboard products, suppliers should help us to minimise fuel use and carbon emissions by providing us with lightweight products and packaging (without detriment to the safety or security of that product). • Suppliers should consider the full life cycle of products during design and packaging by aiming to (in order): o Reduce material, to reduce both weight and waste o Source all materials from sustainable, renewable or recycled sources. o Provide durable products that allow for re-use wherever possible o Ensure material can be recycled (providing clear labelling to show the recycling route) • Suppliers should remove or reduce all single use items wherever possible (including, but not limited to plastics), and where alternative materials are used they should come from recycled or otherwise verified sustainable sources. 
• Suppliers are required to ensure products associated with high rainforest destruction risk (i.e., made from, or consisting of, beef, leather, paper, wood, soy, palm oil or biofuels) are responsibly sourced through transparent supply chains with the appropriate independent certification. • All goods and services should be as resource efficient as possible, e.g. energy efficient lighting and appliances, fuel efficient, hybrid or electric cars, renewable electricity or low water use appliances. • Suppliers should help us reduce the carbon footprint of our products through effective management and reduction of their own carbon footprint through: o Utilising efficient manufacturing processes, and by using materials which do not require excessive energy to extract or produce. o Using local suppliers and running efficient logistical systems that reduce transport of our customers, staff and products as far as possible, to reduce both carbon emissions and local air pollution. o Responsible waste reductions and recycling within their business. USER: According to this document, what are the minimum rights that must be guaranteed with regard to employee working hours and conditions? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
40
21
1,701
null
312
I want to only be given information based on the context provided. The answer should be within 60-70 words in length.
Can you summarize how the AICOA could help big companies?
The primary concern with this type of conduct involves monopoly leveraging.542 As discussed, leveraging theories of harm can take two forms. Offensive leveraging occurs when a firm attempts to use monopoly power in a primary market to extract additional profits from a secondary market.543 By contrast, defensive leveraging involves the use of monopoly power to gain an advantage in a secondary market so as to preserve a primary market monopoly—for example, by eliminating competitive threats that might emerge from the secondary market.544 Defensive leveraging may be a viable theory of harm under existing monopolization law.545 Offensive-leveraging claims, however, cannot succeed under Section 2 absent evidence that a defendant had a dangerous probability of monopolizing a secondary market; mere harm to competition in the secondary market is not sufficient.546 For some of the self-preferencing allegations against Big Tech firms, these limitations may preclude antitrust claims.547 It may be unlikely, for example, that Amazon will achieve monopoly power over most of the products that it sells on its marketplace. As a result, it would be difficult to challenge the preferential display of those products under an offensive-leveraging theory.548 This type of alleged favoritism may also be a weak foundation for a defensive-leveraging or monopoly-maintenance case; it is not clear that Amazon’s elevation of allegedly inferior products would help it maintain a putative e-commerce monopoly. Similarly, the case law governing refusals to deal may serve as an impediment to antitrust claims challenging platform self-preferencing. A platform operator’s favorable treatment of its own verticals relative to rivals that use its platform is typically less harmful to rivals than an outright refusal of access.549 Because antitrust imposes access duties only in a narrow set of circumstances, courts would likely find many forms of self-preferencing to be permissible if such conduct is evaluated as a refusal to deal.550 In the 118th Congress, the American Innovation and Choice Online Act (AICOA) would respond to these doctrinal difficulties by prohibiting covered platform operators from preferencing their own products and services “in a manner that would materially harm competition.”551 Given the ubiquity of self-preferencing by vertically integrated firms, the meaning of the “materially harm competition” standard is key to assessing the prohibition’s scope. However, many argue the meaning of that language is not clear.552 The bill does not by its terms clarify whether the “materially harm competition” standard embodies a consumer-welfare test or one of the alternative standards for assessing competitive harm urged by proponents of antitrust reform.553 As a result, it is unclear whether the AICOA would permit defendants to justify challenged conduct on the ground that it benefits consumers. If the AICOA becomes law, this may be a dispositive issue in many litigated cases. A wide range of platform self-preferencing may harm a firm’s rivals while also offering consumer benefits. 
For example, when Google displays a Google Maps result in response to a search query, it may disadvantage rival map services, but benefit consumers.554 Apple’s preinstallation of its own apps on iPhones, Microsoft’s inclusion of certain apps with its Windows operating system, and Amazon’s free provision of its video-streaming service to Amazon Prime members may have similar effects.555 It is not clear how the “materially harm competition” standard would apply to such practices. In cases that do not involve per se offenses, Sherman Act defendants typically have the opportunity to defend challenged conduct on the ground that it benefits consumers.556 To the extent that the “materially harm competition” standard is intended to incorporate prevailing concepts of competitive harm from the antitrust case law, then, consumer-welfare arguments would likely be cognizable. In interpreting other industry-specific competition statutes, however, some courts and commentators have taken the view that “harm to competition” encompasses types of harm beyond those proscribed by the antitrust laws.557 Additionally, some of the AICOA’s proponents have rejected suggestions to amend the bill to adopt a consumer-welfare test.558 An interpretation that eschewed consumer-welfare justifications would also be consistent with the normative vision articulated by many advocates of antitrust reform. As discussed, the role that consumer welfare is meant to play in non-welfarist conceptions of “competition” is not clear.559 Much of the reformist literature, though, appears to reject the idea that courts and enforcers should balance different antitrust goals against one another.560 This context, along with the bill’s omission of other traditional antitrust concepts like market power, may cut against the argument that consumer-welfare arguments would be cognizable under the “materially harm competition” standard.561 The DCPCA appears to be more explicit about this issue. That legislation would make it presumptively unlawful for covered platforms to preference their own products and services, “regardless of any alleged procompetitive benefits or efficiencies.”562 Defendants could rebut an allegation of unlawful self-preferencing only by establishing by clear and convincing evidence that their conduct “did not result in any harm to the relevant aggrieved party.”563
The primary concern with this type of conduct involves monopoly leveraging.542 As discussed, leveraging theories of harm can take two forms. Offensive leveraging occurs when a firm attempts to use monopoly power in a primary market to extract additional profits from a secondary market.543 By contrast, defensive leveraging involves the use of monopoly power to gain an advantage in a secondary market so as to preserve a primary market monopoly—for example, by eliminating competitive threats that might emerge from the secondary market.544 Defensive leveraging may be a viable theory of harm under existing monopolization law.545 Offensive-leveraging claims, however, cannot succeed under Section 2 absent evidence that a defendant had a dangerous probability of monopolizing a secondary market; mere harm to competition in the secondary market is not sufficient.546 For some of the self-preferencing allegations against Big Tech firms, these limitations may preclude antitrust claims.547 It may be unlikely, for example, that Amazon will achieve monopoly power over most of the products that it sells on its marketplace. As a result, it would be difficult to challenge the preferential display of those products under an offensive-leveraging theory.548 This type of alleged favoritism may also be a weak foundation for a defensive-leveraging or monopoly-maintenance case; it is not clear that Amazon’s elevation of allegedly inferior products would help it maintain a putative e-commerce monopoly. Similarly, the case law governing refusals to deal may serve as an impediment to antitrust claims challenging platform self-preferencing. A platform operator’s favorable treatment of its own verticals relative to rivals that use its platform is typically less harmful to rivals than an outright refusal of access.549 Because antitrust imposes access duties only in a narrow set of circumstances, courts would likely find many forms of self-preferencing to be permissible if such conduct is evaluated as a refusal to deal.550 In the 118th Congress, the American Innovation and Choice Online Act (AICOA) would respond to these doctrinal difficulties by prohibiting covered platform operators from preferencing their own products and services “in a manner that would materially harm competition.”551 Given the ubiquity of self-preferencing by vertically integrated firms, the meaning of the “materially harm competition” standard is key to assessing the prohibition’s scope. However, many argue the meaning of that language is not clear.552 The bill does not by its terms clarify whether the “materially harm competition” standard embodies a consumer-welfare test or one of the alternative standards for assessing competitive harm urged by proponents of antitrust reform.553 As a result, it is unclear whether the AICOA would permit defendants to justify challenged conduct on the ground that it benefits consumers. If the AICOA becomes law, this may be a dispositive issue in many litigated cases. A wide range of platform self-preferencing may harm a firm’s rivals while also offering consumer benefits. 
For example, when Google displays a Google Maps result in response to a search query, it may disadvantage rival map services, but benefit consumers.554 Apple’s preinstallation of its own apps on iPhones, Microsoft’s inclusion of certain apps with its Windows operating system, and Amazon’s free provision of its video-streaming service to Amazon Prime members may have similar effects.555 It is not clear how the “materially harm competition” standard would apply to such practices. In cases that do not involve per se offenses, Sherman Act defendants typically have the opportunity to defend challenged conduct on the ground that it benefits consumers.556 To the extent that the “materially harm competition” standard is intended to incorporate prevailing concepts of competitive harm from the antitrust case law, then, consumer-welfare arguments would likely be cognizable. In interpreting other industry-specific competition statutes, however, some courts and commentators have taken the view that “harm to competition” encompasses types of harm beyond those proscribed by the antitrust laws.557 Additionally, some of the AICOA’s proponents have rejected suggestions to amend the bill to adopt a consumer-welfare test.558 An interpretation that eschewed consumer-welfare justifications would also be consistent with the normative vision articulated by many advocates of antitrust reform. As discussed, the role that consumer welfare is meant to play in non-welfarist conceptions of “competition” is not clear.559 Much of the reformist literature, though, appears to reject the idea that courts and enforcers should balance different antitrust goals against one another.560 This context, along with the bill’s omission of other traditional antitrust concepts like market power, may cut against the argument that consumer-welfare arguments would be cognizable under the “materially harm competition” standard.561 The DCPCA appears to be more explicit about this issue. That legislation would make it presumptively unlawful for covered platforms to preference their own products and services, “regardless of any alleged procompetitive benefits or efficiencies.”562 Defendants could rebut an allegation of unlawful self-preferencing only by establishing by clear and convincing evidence that their conduct “did not result in any harm to the relevant aggrieved party.”563 USER: Can you summarize how the AICOA could help big companies? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
I want to only be given information based on the context provided. The answer should be within 60-70 words in length. EVIDENCE: The primary concern with this type of conduct involves monopoly leveraging.542 As discussed, leveraging theories of harm can take two forms. Offensive leveraging occurs when a firm attempts to use monopoly power in a primary market to extract additional profits from a secondary market.543 By contrast, defensive leveraging involves the use of monopoly power to gain an advantage in a secondary market so as to preserve a primary market monopoly—for example, by eliminating competitive threats that might emerge from the secondary market.544 Defensive leveraging may be a viable theory of harm under existing monopolization law.545 Offensive-leveraging claims, however, cannot succeed under Section 2 absent evidence that a defendant had a dangerous probability of monopolizing a secondary market; mere harm to competition in the secondary market is not sufficient.546 For some of the self-preferencing allegations against Big Tech firms, these limitations may preclude antitrust claims.547 It may be unlikely, for example, that Amazon will achieve monopoly power over most of the products that it sells on its marketplace. As a result, it would be difficult to challenge the preferential display of those products under an offensive-leveraging theory.548 This type of alleged favoritism may also be a weak foundation for a defensive-leveraging or monopoly-maintenance case; it is not clear that Amazon’s elevation of allegedly inferior products would help it maintain a putative e-commerce monopoly. Similarly, the case law governing refusals to deal may serve as an impediment to antitrust claims challenging platform self-preferencing. A platform operator’s favorable treatment of its own verticals relative to rivals that use its platform is typically less harmful to rivals than an outright refusal of access.549 Because antitrust imposes access duties only in a narrow set of circumstances, courts would likely find many forms of self-preferencing to be permissible if such conduct is evaluated as a refusal to deal.550 In the 118th Congress, the American Innovation and Choice Online Act (AICOA) would respond to these doctrinal difficulties by prohibiting covered platform operators from preferencing their own products and services “in a manner that would materially harm competition.”551 Given the ubiquity of self-preferencing by vertically integrated firms, the meaning of the “materially harm competition” standard is key to assessing the prohibition’s scope. However, many argue the meaning of that language is not clear.552 The bill does not by its terms clarify whether the “materially harm competition” standard embodies a consumer-welfare test or one of the alternative standards for assessing competitive harm urged by proponents of antitrust reform.553 As a result, it is unclear whether the AICOA would permit defendants to justify challenged conduct on the ground that it benefits consumers. If the AICOA becomes law, this may be a dispositive issue in many litigated cases. A wide range of platform self-preferencing may harm a firm’s rivals while also offering consumer benefits. 
For example, when Google displays a Google Maps result in response to a search query, it may disadvantage rival map services, but benefit consumers.554 Apple’s preinstallation of its own apps on iPhones, Microsoft’s inclusion of certain apps with its Windows operating system, and Amazon’s free provision of its video-streaming service to Amazon Prime members may have similar effects.555 It is not clear how the “materially harm competition” standard would apply to such practices. In cases that do not involve per se offenses, Sherman Act defendants typically have the opportunity to defend challenged conduct on the ground that it benefits consumers.556 To the extent that the “materially harm competition” standard is intended to incorporate prevailing concepts of competitive harm from the antitrust case law, then, consumer-welfare arguments would likely be cognizable. In interpreting other industry-specific competition statutes, however, some courts and commentators have taken the view that “harm to competition” encompasses types of harm beyond those proscribed by the antitrust laws.557 Additionally, some of the AICOA’s proponents have rejected suggestions to amend the bill to adopt a consumer-welfare test.558 An interpretation that eschewed consumer-welfare justifications would also be consistent with the normative vision articulated by many advocates of antitrust reform. As discussed, the role that consumer welfare is meant to play in non-welfarist conceptions of “competition” is not clear.559 Much of the reformist literature, though, appears to reject the idea that courts and enforcers should balance different antitrust goals against one another.560 This context, along with the bill’s omission of other traditional antitrust concepts like market power, may cut against the argument that consumer-welfare arguments would be cognizable under the “materially harm competition” standard.561 The DCPCA appears to be more explicit about this issue. That legislation would make it presumptively unlawful for covered platforms to preference their own products and services, “regardless of any alleged procompetitive benefits or efficiencies.”562 Defendants could rebut an allegation of unlawful self-preferencing only by establishing by clear and convincing evidence that their conduct “did not result in any harm to the relevant aggrieved party.”563 USER: Can you summarize how the AICOA could help big companies? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
21
10
801
null
176
Respond succinctly and directly. Refer only to the provided document. After your answer, provide any relevant quotes from the source document in italics.
What are the 5th gen Standard Series CPUs based on?
**Microsoft Azure SQL Database pricing** vCore A vCore-based purchase model is best if you are looking for flexibility, control and transparency of individual resource consumption. This model allows you to scale compute, memory and storage based upon your workload needs and provides a straightforward way to translate on-premises workload requirements to the cloud. Serverless compute The SQL Database server-less compute tier optimises price-performance and simplifies performance management for single databases with intermittent, unpredictable usage by auto-scaling compute and billing for compute used per second. For details, see the FAQ section and documentation. Hyperscale Serverless Hyperscale combines the benefits of compute auto-scaling with storage auto-scaling up to 100 TB to help you optimise price-performance of your database resources to meet your workload's needs. If zone redundancy is enabled, the database must have at least one high availability (HA) replica. The pricing below is applicable for both primary and secondary replicas. Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. Primary replica pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) High Availability Replica and Named Replica Pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) Storage In the Hyperscale tier, you are charged for storage for your database based on actual allocation. Storage is dynamically allocated between 10 GB and 100 TB, in 10 GB increments. Storage Price GB/month $0.25 Back up storage (point-in-time restore) By default, seven days of backups are stored in RA-GRS Standard blob storage. Any corrupted or deleted database can be restored to any point in time within that period. The storage is used by periodic storage blob snapshots and all generated transaction log. The usage of the backup storage depends on the rate of change of the database and the configured retention period. Back up storage consumption will be charged in GB/month. Learn more about automated backups, and how to monitor and manage backup costs. Redundancy Price LRS $0.08/GB/month ZRS $0.10/GB/month RA-GRS $0.20/GB/month Provisioned compute The SQL Database provisioned compute tier provides a fixed amount of compute resource for a fixed price billed hourly. It optimises price-performance for single databases and elastic pools with more regular usage that cannot afford any delay in compute warm-up after idle usage periods. For details, see the FAQ section and documentation. Hyperscale Build new, highly scalable cloud applications on Azure SQL Database Hyperscale. Hyperscale provides rapid, auto-scaling storage up to 100 TB to help you optimise database resources for your workload's needs. To enable zone redundancy, the database must have at least one secondary high availability replica. The pricing below is applicable for both primary and secondary replicas. 
Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 3 year reserved capacity 1 2 10.2 $0.366/hour $0.238/hour ~35% savings $0.165/hour ~55% savings 4 20.4 $0.731/hour $0.475/hour ~35% savings $0.329/hour ~55% savings 6 30.6 $1.096/hour $0.713/hour ~35% savings $0.494/hour ~55% savings 8 40.8 $1.462/hour $0.950/hour ~35% savings $0.658/hour ~55% savings 10 51 $1.827/hour $1.188/hour ~35% savings $0.822/hour ~55% savings 12 61.2 $2.192/hour $1.425/hour ~35% savings $0.987/hour ~55% savings 14 71.4 $2.558/hour $1.663/hour ~35% savings $1.151/hour ~55% savings 16 81.6 $2.923/hour $1.900/hour ~35% savings $1.316/hour ~55% savings 18 91.8 $3.288/hour $2.137/hour ~35% savings $1.480/hour ~55% savings 20 102 $3.654/hour $2.375/hour ~35% savings $1.644/hour ~55% savings 24 122.4 $4.384/hour $2.850/hour ~35% savings $1.973/hour ~55% savings 32 163.2 $5.846/hour $3.800/hour ~35% savings $2.631/hour ~55% savings 40 204 $7.307/hour $4.749/hour ~35% savings $3.288/hour ~55% savings 80 396 $14.613/hour $9.498/hour ~35% savings $6.576/hour ~55% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores) with an option to choose between compute generations. DC-series The DC-series logical CPUs are based on Intel XEON E-2288G processors with Software Guard Extensions (Intel SGX) technology. In the DC-series, 1 vCore = 1 physical core. DC-series supports Always Encrypted with secure enclaves and it is designed to for workloads that process sensitive data and demand confidential query processing capabilities. vCORE Memory (GB) Pay as you go 2 9 $0.73/hour 4 18 $1.46/hour 6 27 $2.19/hour 8 36 $2.92/hour 10 45 $3.65/hour 12 54 $4.38/hour 14 63 $5.11/hour 16 72 $5.84/hour 18 81 $6.57/hour 20 90 $7.30/hour 32 144 $11.68/hour 40 180 $14.60/hour This hardware option is subject to regional availability. See our documentation for the latest list of available regions. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. Premium-series Premium-series logical CPUs are based on the latest Intel(R) Xeon (Ice Lake) and AMD EPYCTM 7763v (Milan) chipsets, 1 vCore = 1 hyper thread. The premium-series logical CPU is a great fit for database workloads that require faster compute and memory performance as well as improved IO and network experience over the standard-series hardware offering. 
vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 2 10.4 $0.366/hour $0.238/hour ~35% savings 4 20.8 $0.731/hour $0.475/hour ~35% savings 6 31.1 $1.096/hour $0.713/hour ~35% savings 8 41.5 $1.462/hour $0.950/hour ~35% savings 10 51.9 $1.827/hour $1.188/hour ~35% savings 12 62.3 $2.192/hour $1.425/hour ~35% savings 14 72.7 $2.558/hour $1.663/hour ~35% savings 16 83 $2.923/hour $1.900/hour ~35% savings 18 93.4 $3.288/hour $2.137/hour ~35% savings 20 103.8 $3.654/hour $2.375/hour ~35% savings 24 124.6 $4.384/hour $2.850/hour ~35% savings 32 166.1 $5.846/hour $3.800/hour ~35% savings 40 207.6 $7.307/hour $4.749/hour ~35% savings 64 664.4 $11.691/hour $7.599/hour ~35% savings 80 415.2 $14.613/hour $9.498/hour ~35% savings 128 647.8 $23.381/hour $15.197/hour ~35% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations.
[Task Description] ======= Respond succinctly and directly. Refer only to the provided document. After your answer, provide any relevant quotes from the source document in italics. ---------------- [Text] ======= **Microsoft Azure SQL Database pricing** vCore A vCore-based purchase model is best if you are looking for flexibility, control and transparency of individual resource consumption. This model allows you to scale compute, memory and storage based upon your workload needs and provides a straightforward way to translate on-premises workload requirements to the cloud. Serverless compute The SQL Database server-less compute tier optimises price-performance and simplifies performance management for single databases with intermittent, unpredictable usage by auto-scaling compute and billing for compute used per second. For details, see the FAQ section and documentation. Hyperscale Serverless Hyperscale combines the benefits of compute auto-scaling with storage auto-scaling up to 100 TB to help you optimise price-performance of your database resources to meet your workload's needs. If zone redundancy is enabled, the database must have at least one high availability (HA) replica. The pricing below is applicable for both primary and secondary replicas. Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. Primary replica pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) High Availability Replica and Named Replica Pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) Storage In the Hyperscale tier, you are charged for storage for your database based on actual allocation. Storage is dynamically allocated between 10 GB and 100 TB, in 10 GB increments. Storage Price GB/month $0.25 Back up storage (point-in-time restore) By default, seven days of backups are stored in RA-GRS Standard blob storage. Any corrupted or deleted database can be restored to any point in time within that period. The storage is used by periodic storage blob snapshots and all generated transaction log. The usage of the backup storage depends on the rate of change of the database and the configured retention period. Back up storage consumption will be charged in GB/month. Learn more about automated backups, and how to monitor and manage backup costs. Redundancy Price LRS $0.08/GB/month ZRS $0.10/GB/month RA-GRS $0.20/GB/month Provisioned compute The SQL Database provisioned compute tier provides a fixed amount of compute resource for a fixed price billed hourly. It optimises price-performance for single databases and elastic pools with more regular usage that cannot afford any delay in compute warm-up after idle usage periods. For details, see the FAQ section and documentation. Hyperscale Build new, highly scalable cloud applications on Azure SQL Database Hyperscale. Hyperscale provides rapid, auto-scaling storage up to 100 TB to help you optimise database resources for your workload's needs. To enable zone redundancy, the database must have at least one secondary high availability replica. 
The pricing below is applicable for both primary and secondary replicas. Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 3 year reserved capacity 1 2 10.2 $0.366/hour $0.238/hour ~35% savings $0.165/hour ~55% savings 4 20.4 $0.731/hour $0.475/hour ~35% savings $0.329/hour ~55% savings 6 30.6 $1.096/hour $0.713/hour ~35% savings $0.494/hour ~55% savings 8 40.8 $1.462/hour $0.950/hour ~35% savings $0.658/hour ~55% savings 10 51 $1.827/hour $1.188/hour ~35% savings $0.822/hour ~55% savings 12 61.2 $2.192/hour $1.425/hour ~35% savings $0.987/hour ~55% savings 14 71.4 $2.558/hour $1.663/hour ~35% savings $1.151/hour ~55% savings 16 81.6 $2.923/hour $1.900/hour ~35% savings $1.316/hour ~55% savings 18 91.8 $3.288/hour $2.137/hour ~35% savings $1.480/hour ~55% savings 20 102 $3.654/hour $2.375/hour ~35% savings $1.644/hour ~55% savings 24 122.4 $4.384/hour $2.850/hour ~35% savings $1.973/hour ~55% savings 32 163.2 $5.846/hour $3.800/hour ~35% savings $2.631/hour ~55% savings 40 204 $7.307/hour $4.749/hour ~35% savings $3.288/hour ~55% savings 80 396 $14.613/hour $9.498/hour ~35% savings $6.576/hour ~55% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores) with an option to choose between compute generations. DC-series The DC-series logical CPUs are based on Intel XEON E-2288G processors with Software Guard Extensions (Intel SGX) technology. In the DC-series, 1 vCore = 1 physical core. DC-series supports Always Encrypted with secure enclaves and it is designed to for workloads that process sensitive data and demand confidential query processing capabilities. vCORE Memory (GB) Pay as you go 2 9 $0.73/hour 4 18 $1.46/hour 6 27 $2.19/hour 8 36 $2.92/hour 10 45 $3.65/hour 12 54 $4.38/hour 14 63 $5.11/hour 16 72 $5.84/hour 18 81 $6.57/hour 20 90 $7.30/hour 32 144 $11.68/hour 40 180 $14.60/hour This hardware option is subject to regional availability. See our documentation for the latest list of available regions. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. Premium-series Premium-series logical CPUs are based on the latest Intel(R) Xeon (Ice Lake) and AMD EPYCTM 7763v (Milan) chipsets, 1 vCore = 1 hyper thread. The premium-series logical CPU is a great fit for database workloads that require faster compute and memory performance as well as improved IO and network experience over the standard-series hardware offering. 
vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 2 10.4 $0.366/hour $0.238/hour ~35% savings 4 20.8 $0.731/hour $0.475/hour ~35% savings 6 31.1 $1.096/hour $0.713/hour ~35% savings 8 41.5 $1.462/hour $0.950/hour ~35% savings 10 51.9 $1.827/hour $1.188/hour ~35% savings 12 62.3 $2.192/hour $1.425/hour ~35% savings 14 72.7 $2.558/hour $1.663/hour ~35% savings 16 83 $2.923/hour $1.900/hour ~35% savings 18 93.4 $3.288/hour $2.137/hour ~35% savings 20 103.8 $3.654/hour $2.375/hour ~35% savings 24 124.6 $4.384/hour $2.850/hour ~35% savings 32 166.1 $5.846/hour $3.800/hour ~35% savings 40 207.6 $7.307/hour $4.749/hour ~35% savings 64 664.4 $11.691/hour $7.599/hour ~35% savings 80 415.2 $14.613/hour $9.498/hour ~35% savings 128 647.8 $23.381/hour $15.197/hour ~35% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. ---------------- [Query] ======= What are the 5th gen Standard Series CPUs based on?
Respond succinctly and directly. Refer only to the provided document. After your answer, provide any relevant quotes from the source document in italics. EVIDENCE: **Microsoft Azure SQL Database pricing** vCore A vCore-based purchase model is best if you are looking for flexibility, control and transparency of individual resource consumption. This model allows you to scale compute, memory and storage based upon your workload needs and provides a straightforward way to translate on-premises workload requirements to the cloud. Serverless compute The SQL Database server-less compute tier optimises price-performance and simplifies performance management for single databases with intermittent, unpredictable usage by auto-scaling compute and billing for compute used per second. For details, see the FAQ section and documentation. Hyperscale Serverless Hyperscale combines the benefits of compute auto-scaling with storage auto-scaling up to 100 TB to help you optimise price-performance of your database resources to meet your workload's needs. If zone redundancy is enabled, the database must have at least one high availability (HA) replica. The pricing below is applicable for both primary and secondary replicas. Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. Primary replica pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) High Availability Replica and Named Replica Pricing Minimum vCores Maximum vCores Minimum Memory (GB) Maximum Memory (GB) Price 0.5 80 2.05 240 $0.0001050/vCore-second ($0.378/vCore-hour) Storage In the Hyperscale tier, you are charged for storage for your database based on actual allocation. Storage is dynamically allocated between 10 GB and 100 TB, in 10 GB increments. Storage Price GB/month $0.25 Back up storage (point-in-time restore) By default, seven days of backups are stored in RA-GRS Standard blob storage. Any corrupted or deleted database can be restored to any point in time within that period. The storage is used by periodic storage blob snapshots and all generated transaction log. The usage of the backup storage depends on the rate of change of the database and the configured retention period. Back up storage consumption will be charged in GB/month. Learn more about automated backups, and how to monitor and manage backup costs. Redundancy Price LRS $0.08/GB/month ZRS $0.10/GB/month RA-GRS $0.20/GB/month Provisioned compute The SQL Database provisioned compute tier provides a fixed amount of compute resource for a fixed price billed hourly. It optimises price-performance for single databases and elastic pools with more regular usage that cannot afford any delay in compute warm-up after idle usage periods. For details, see the FAQ section and documentation. Hyperscale Build new, highly scalable cloud applications on Azure SQL Database Hyperscale. Hyperscale provides rapid, auto-scaling storage up to 100 TB to help you optimise database resources for your workload's needs. To enable zone redundancy, the database must have at least one secondary high availability replica. 
The pricing below is applicable for both primary and secondary replicas. Standard-series (Gen 5) Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers. vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 3 year reserved capacity 1 2 10.2 $0.366/hour $0.238/hour ~35% savings $0.165/hour ~55% savings 4 20.4 $0.731/hour $0.475/hour ~35% savings $0.329/hour ~55% savings 6 30.6 $1.096/hour $0.713/hour ~35% savings $0.494/hour ~55% savings 8 40.8 $1.462/hour $0.950/hour ~35% savings $0.658/hour ~55% savings 10 51 $1.827/hour $1.188/hour ~35% savings $0.822/hour ~55% savings 12 61.2 $2.192/hour $1.425/hour ~35% savings $0.987/hour ~55% savings 14 71.4 $2.558/hour $1.663/hour ~35% savings $1.151/hour ~55% savings 16 81.6 $2.923/hour $1.900/hour ~35% savings $1.316/hour ~55% savings 18 91.8 $3.288/hour $2.137/hour ~35% savings $1.480/hour ~55% savings 20 102 $3.654/hour $2.375/hour ~35% savings $1.644/hour ~55% savings 24 122.4 $4.384/hour $2.850/hour ~35% savings $1.973/hour ~55% savings 32 163.2 $5.846/hour $3.800/hour ~35% savings $2.631/hour ~55% savings 40 204 $7.307/hour $4.749/hour ~35% savings $3.288/hour ~55% savings 80 396 $14.613/hour $9.498/hour ~35% savings $6.576/hour ~55% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores) with an option to choose between compute generations. DC-series The DC-series logical CPUs are based on Intel XEON E-2288G processors with Software Guard Extensions (Intel SGX) technology. In the DC-series, 1 vCore = 1 physical core. DC-series supports Always Encrypted with secure enclaves and it is designed to for workloads that process sensitive data and demand confidential query processing capabilities. vCORE Memory (GB) Pay as you go 2 9 $0.73/hour 4 18 $1.46/hour 6 27 $2.19/hour 8 36 $2.92/hour 10 45 $3.65/hour 12 54 $4.38/hour 14 63 $5.11/hour 16 72 $5.84/hour 18 81 $6.57/hour 20 90 $7.30/hour 32 144 $11.68/hour 40 180 $14.60/hour This hardware option is subject to regional availability. See our documentation for the latest list of available regions. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. Premium-series Premium-series logical CPUs are based on the latest Intel(R) Xeon (Ice Lake) and AMD EPYCTM 7763v (Milan) chipsets, 1 vCore = 1 hyper thread. The premium-series logical CPU is a great fit for database workloads that require faster compute and memory performance as well as improved IO and network experience over the standard-series hardware offering. 
vCORE Memory (GB) Pay as you go 1-year reserved capacity 1 2 10.4 $0.366/hour $0.238/hour ~35% savings 4 20.8 $0.731/hour $0.475/hour ~35% savings 6 31.1 $1.096/hour $0.713/hour ~35% savings 8 41.5 $1.462/hour $0.950/hour ~35% savings 10 51.9 $1.827/hour $1.188/hour ~35% savings 12 62.3 $2.192/hour $1.425/hour ~35% savings 14 72.7 $2.558/hour $1.663/hour ~35% savings 16 83 $2.923/hour $1.900/hour ~35% savings 18 93.4 $3.288/hour $2.137/hour ~35% savings 20 103.8 $3.654/hour $2.375/hour ~35% savings 24 124.6 $4.384/hour $2.850/hour ~35% savings 32 166.1 $5.846/hour $3.800/hour ~35% savings 40 207.6 $7.307/hour $4.749/hour ~35% savings 64 664.4 $11.691/hour $7.599/hour ~35% savings 80 415.2 $14.613/hour $9.498/hour ~35% savings 128 647.8 $23.381/hour $15.197/hour ~35% savings 1Learn more about Azure reservations and Azure SQL Database reserved capacity pricing. Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. USER: What are the 5th gen Standard Series CPUs based on? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
23
10
1,062
null
631
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
Summarize the information about periorbital hyperchromia in about 600 words. In bold at the end of the response, tell me what type of individuals are most affected by it.
INTRODUCTION Although periorbital hyperpigmentation (also called peripalpebral hyperpigmentation, dark eyelids, dark eye circles, dark circles, or simply under-eye circles) is a mere color difference between the palpebral skin and the remaining facial skin, it makes people look tired or older, which negatively affects their quality of life. 1-4 It has a higher prevalence in individuals with darker skin, hair and eyes, and affects age groups and genders equally. Nevertheless, there are a higher number of complaints from women, especially senior women. There are few studies about the etiology of this condition, however dark eye circles with a vascular component are known to present a dominant autosomal family inheritance pattern. 2,3 Periorbital hyperpigmentation seems to have multifactorial causes that involve intrinsic factors (determined by the individual's genetics), and extrinsic factors (sun exposure, smoking, alcoholism and sleep deprivation, for instance). However, the presence of melanic pigment and hemosiderotic pigment in the affected sites is a distinctive feature in its etiopathogeny. 2-4 Melanic hyperpigmentation is more frequent in brunet adults, as a consequence of excessive and cumulative exposure to the sun, which increases the production of melanin, reduces the skin's thickness and increases the dilatation of blood vessels. 2,4,5 Intense vascularization is mainly found in people belonging to certain ethnic groups such as Arabs, Turks, Hindus, inhabitants of the Iberian Peninsula and their respective descendants. In these ethnicities, its manifestation tends to take place earlier, often during childhood. In those individuals there is no change in the color of the skin; the eyelid appears darkened because the dilated vessels are visible due to the transparency of the skin. 2 In those cases, therefore, the problem is often aggravated when the lower eyelid's vessels are more dilated (e.g., from fatigue, insomnia, oral breathing, crying), causing dermal blood extravasation. The liberation of ferric ions takes place locally, entailing the formation of free radicals that stimulate the melanocytes, which generates melanic pigmentation. 2, 4-6 Other causes noted as being responsible for the appearance of dark eye circles are post-inflammatory hyperpigmentation secondary to atopic and contact dermatitis, sleep deprivation, oral breathing, alcoholism, smoking, use of certain medications (contraceptives, chemotherapy, antipsychotic and some types of eye drops), the presence of palpebral sagging (due to aging) and of disorders that develop with hydric retention and palpebral edema (thyroid disorders, nephropathies, cardiopathies and pneumopathies) – all of which worsen the unattractive appearance of dark eye circles. 2-4,7 Various treatments have been proposed for periorbital hyperchromia, however there are few studies on their long-term efficacy. The main types of treatment are: topical application of depigmenting products, chemical peelings, dermabrasion, cryosurgery, fillings with hyaluronic acid, intense pulsed light, CO2, argon, ruby and excimer lasers. 2-4, 6, 8-12 PALPEBRAL ANATOMY The eyelids are tegumentary pleats that participate in facial expression and aesthetics, however their main function is to protect the eyeballs through sensorial filtration actions carried out by the palpebral cilia, and the Meibomian and lachrymal glands' secretions.
In this manner, the cornea remains hydrated and the closing movements of the eyes function as a barrier to external traumas and prevent the cornea from drying out. 13-17 The upper eyelid reaches upwards to the eyebrow, which separates it from the forehead. The lower eyelid extends downwards up to the lower border of the orbit, and is delimited by the genian region. 15 The palpebral fissure, which measures 9-10 mm in adults, is determined by the interaction of the muscles that open and close the eyelids. To open the eyelid, the palpebral elevator muscle is assisted by two other accessory muscles (Muller's and frontalis muscles). 18 The aging process decreases the palpebral fissure's vertical opening, due to the progressive lowering of the upper eyelid, 14 which is caused by a decrease in the upper eyelid lifter muscle's aponeurosis action. 15 The skin becomes more flaccid, less elastic and has a greater propensity to wrinkle 16. The orbicular and tarsal muscles, the orbital septum and the conjunctival mucous membrane also go through transformations in the elderly. In addition, gravity and facial expressions influence the mechanical deformation of those structures. 17 A cohort study with 320 patients (aged 10-89) evaluated participants' eyelids frontally and laterally and found that there is a correlation between a decrease in the palpebral fissure and an increase in the age of patients. 19 PALPEBRAL REGION'S SKIN AND SUBCUTANEOUS TISSUE Palpebral skin is the thinnest in the human body (< 1 mm). Its epidermis is constituted of stratified epithelium, which is very thin (0.4 mm) compared to that of the palmoplantar region (the thickness of which is approximately 1.6 mm). 13 The nasal portion of the palpebral skin has thinner hair and more sebaceous glands (i.e., it is softer and oilier) than its temporal portion. The transition between the eyelids' thin skin and the remaining facial skin is clinically observable. 13 The palpebral dermis is composed of loose conjunctive tissue, and is extremely thin in that region. It is absent in the pre-tarsal skin, in the medial and lateral ligaments of the eyelid, where the skin adheres to the underlying fibrous tissue. The thinness of the skin, combined with the lack of fatty tissue, gives that region its characteristic translucency. As a result, the accumulation of melanin and/or vessel dilatation in that region can be easily seen, through transparency, as bilateral homogeneous hyperpigmentation. 2,4,5,13 PALPEBRAL REGION'S VENOUS AND LYMPH VASCULARIZATION The eyelids' arterial irrigation comes through many vessels: the supratrochlear, supraorbital, lachrymal and dorsum of the nose arteries (all originating in the facial artery); the angular artery (originating in the facial artery); the transverse artery (originating in the facial artery); the transverse facial artery (originating in the superficial temporal artery) and the branches of the superficial temporal artery itself 20 (Figure 1). Venous drainage (following an external pattern) takes place through the veins associated with these arteries and (following an internal pattern) penetrates the orbit through connections with ophthalmic veins 20 (Figure 2). Lymphatic drainage takes place mainly through the parotid lymph nodes; some of the drainage from the medial angle of the eye to the lymph vessels is associated with the angular and facial arteries, towards the submandibular lymph nodes.
20 COLOR OF THE SKIN IN THE PALPEBRAL REGION The palpebral skin's color results from the combination of several factors, some of genetic-racial origin (such as the amount of melanin pigment), others of individual or regional and even gender origins, such as the thickness of the several components and the blood volume in their vessels.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== Summarize the information about periorbital hyperchromia in about 600 words. In bold at the end of the response, tell me what type of individuals are most affected by it. {passage 0} ========== INTRODUCTION Although periorbital hyperpigmentation (also called peripalpebral hyperpigmentation, dark eyelids, dark eye circles, dark circles, or simply under-eye circles) is a mere color difference between the palpebral skin and the remaining facial skin, it makes people look tired or older, which negatively affects their quality of life. 1-4 It has a higher prevalence in individuals with darker skin, hair and eyes, and affects age groups and genders equally. Nevertheless, there are a higher number of complaints from women, especially senior women. There are few studies about the etiology of this condition, however dark eye circles with a vascular component are known to present a dominant autosomal family inheritance pattern. 2,3 Periorbital hyperpigmentation seems to have multifactorial causes that involve intrinsic factors (determined by the individual's genetics), and extrinsic factors (sun exposure, smoking, alcoholism and sleep deprivation, for instance). However, the presence of melanic pigment and hemosiderotic pigment in the affected sites is a distinctive feature in its etiopathogeny. 2-4 Melanic hyperpigmentation is more frequent in brunet adults, as a consequence of excessive and cumulative exposure to the sun, which increases the production of melanin, reduces the skin's thickness and increases the dilatation of blood vessels. 2,4,5 Intense vascularization is mainly found in people belonging to certain ethnic groups such as Arabs, Turks, Hindus, inhabitants of the Iberian Peninsula and their respective descendants. In these ethnicities, its manifestation tends to take place earlier, often during childhood. In those individuals there is no change in the color of the skin; the eyelid appears darkened because the dilated vessels are visible due to the transparency of the skin. 2 In those cases, therefore, the problem is often aggravated when the lower eyelid's vessels are more dilated (e.g., from fatigue, insomnia, oral breathing, crying), causing dermal blood extravasation. The liberation of ferric ions takes place locally, entailing the formation of free radicals that stimulate the melanocytes, which generates melanic pigmentation. 2, 4-6 Other causes noted as being responsible for the appearance of dark eye circles are post-inflammatory hyperpigmentation secondary to atopic and contact dermatitis, sleep deprivation, oral breathing, alcoholism, smoking, use of certain medications (contraceptives, chemotherapy, antipsychotic and some types of eye drops), the presence of palpebral sagging (due to aging) and of disorders that develop with hydric retention and palpebral edema (thyroid disorders, nephropathies, cardiopathies and pneumopathies) – all of which worsen the unattractive appearance of dark eye circles. 2-4,7 Various treatments have been proposed for periorbital hyperchromia, however there are few studies on their long-term efficacy. The main types of treatment are: topical application of depigmenting products, chemical peelings, dermabrasion, cryosurgery, fillings with hyaluronic acid, intense pulsed light, CO2, argon, ruby and excimer lasers.
2-4, 6, 8-12 PALPEBRAL ANATOMY The eyelids are tegumentary pleats that participate in facial expression and aesthetics, however their main function is to protect the eyeballs through sensorial filtration actions carried out by the palpebral cilia, and the Meibomian and lachrymal glands' secretions. In this manner, the cornea remains hydrated and the closing movements of the eyes function as a barrier to external traumas and prevent the cornea from drying out. 13-17 The upper eyelid reaches upwards to the eyebrow, which separates it from the forehead. The lower eyelid extends downwards up to the lower border of the orbit, and is delimited by the genian region. 15 The palpebral fissure, which measures 9-10 mm in adults, is determined by the interaction of the muscles that open and close the eyelids. To open the eyelid, the palpebral elevator muscle is assisted by two other accessory muscles (Muller's and frontalis muscles). 18 The aging process decreases the palpebral fissure's vertical opening, due to the progressive lowering of the upper eyelid, 14 which is caused by a decrease in the upper eyelid lifter muscle's aponeurosis action. 15 The skin becomes more flaccid, less elastic and has a greater propensity to wrinkle 16. The orbicular and tarsal muscles, the orbital septum and the conjunctival mucous membrane also go through transformations in the elderly. In addition, gravity and facial expressions influence the mechanical deformation of those structures. 17 A cohort study with 320 patients (aged 10-89) evaluated participants' eyelids frontally and laterally and found that there is a correlation between a decrease in the palpebral fissure and an increase in the age of patients. 19 PALPEBRAL REGION'S SKIN AND SUBCUTANEOUS TISSUE Palpebral skin is the thinnest in the human body (< 1 mm). Its epidermis is constituted of stratified epithelium, which is very thin (0.4 mm) compared to that of the palmoplantar region (the thickness of which is approximately 1.6 mm). 13 The nasal portion of the palpebral skin has thinner hair and more sebaceous glands (i.e., it is softer and oilier) than its temporal portion. The transition between the eyelids' thin skin and the remaining facial skin is clinically observable. 13 The palpebral dermis is composed of loose conjunctive tissue, and is extremely thin in that region. It is absent in the pre-tarsal skin, in the medial and lateral ligaments of the eyelid, where the skin adheres to the underlying fibrous tissue. The thinness of the skin, combined with the lack of fatty tissue, gives that region its characteristic translucency. As a result, the accumulation of melanin and/or vessel dilatation in that region can be easily seen, through transparency, as bilateral homogeneous hyperpigmentation. 2,4,5,13 PALPEBRAL REGION'S VENOUS AND LYMPH VASCULARIZATION The eyelids' arterial irrigation comes through many vessels: the supratrochlear, supraorbital, lachrymal and dorsum of the nose arteries (all originating in the facial artery); the angular artery (originating in the facial artery); the transverse artery (originating in the facial artery); the transverse facial artery (originating in the superficial temporal artery) and the branches of the superficial temporal artery itself 20 (Figure 1).
Venous drainage (following an external pattern) takes place through the veins associated with these arteries and (following an internal pattern) penetrates the orbit through connections with ophthalmic veins 20 (Figure 2). Lymphatic drainage takes place mainly through the parotid lymph nodes; some of the drainage from the medial angle of the eye to the lymph vessels is associated with the angular and facial arteries, towards the submandibular lymph nodes. 20 COLOR OF THE SKIN IN THE PALPEBRAL REGION The palpebral skin's color results from the combination of several factors, some of genetic-racial origin (such as the amount of melanin pigment), others of individual or regional and even gender origins, such as the thickness of the several components and the blood volume in their vessels. http://www.surgicalcosmetic.org.br/details/158/en-US/periorbital-hyperchromia
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: INTRODUCTION Although periorbital hyperpigmentation (also called peripalpebral hyperpigmentation, dark eyelids, dark eye circles, dark circles, or simply under-eye circles) is a mere color difference between the palpebral skin and the remaining facial skin, it makes people look tired or older, which negatively affects their quality of life. 1-4 It has a higher prevalence in individuals with darker skin, hair and eyes, and affects age groups and genders equally. Nevertheless, there are a higher number of complaints from women, especially senior women. There are few studies about the etiology of this condition, however dark eye circles with a vascular component are known to present a dominant autosomal family inheritance pattern. 2,3 Periorbital hyperpigmentation seems to have multifactorial causes that involve intrinsic factors (determined by the individual's genetics), and extrinsic factors (sun exposure, smoking, alcoholism and sleep deprivation, for instance). However, the presence of melanic pigment and hemosiderotic pigment in the affected sites is a distinctive feature in its etiopathogeny. 2-4 Melanic hyperpigmentation is more frequent in brunet adults, as a consequence of excessive and cumulative exposure to the sun, which increases the production of melanin, reduces the skin's thickness and increases the dilatation of blood vessels. 2,4,5 Intense vascularization is mainly found in people belonging to certain ethnic groups such as Arabs, Turks, Hindus, inhabitants of the Iberian Peninsula and their respective descendants. In these ethnicities, its manifestation tends to take place earlier, often during childhood. In those individuals there is no change in the color of the skin; the eyelid appears darkened because the dilated vessels are visible due to the transparency of the skin. 2 In those cases, therefore, the problem is often aggravated when the lower eyelid's vessels are more dilated (e.g., from fatigue, insomnia, oral breathing, crying), causing dermal blood extravasation. The liberation of ferric ions takes place locally, entailing the formation of free radicals that stimulate the melanocytes, which generates melanic pigmentation. 2, 4-6 Other causes noted as being responsible for the appearance of dark eye circles are post-inflammatory hyperpigmentation secondary to atopic and contact dermatitis, sleep deprivation, oral breathing, alcoholism, smoking, use of certain medications (contraceptives, chemotherapy, antipsychotic and some types of eye drops), the presence of palpebral sagging (due to aging) and of disorders that develop with hydric retention and palpebral edema (thyroid disorders, nephropathies, cardiopathies and pneumopathies) – all of which worsen the unattractive appearance of dark eye circles. 2-4,7 Various treatments have been proposed for periorbital hyperchromia, however there are few studies on their long-term efficacy. The main types of treatment are: topical application of depigmenting products, chemical peelings, dermabrasion, cryosurgery, fillings with hyaluronic acid, intense pulsed light, CO2, argon, ruby and excimer lasers.
2-4, 6, 8-12 PALPEBRAL ANATOMY The eyelids are tegumentary pleats that participate in facial expression and aesthetics, however their main function is to protect the eyeballs through sensorial filtration actions carried out by the palpebral cilia, and the Meibomian and lachrymal glands' secretions. In this manner, the cornea remains hydrated and the closing movements of the eyes function as a barrier to external traumas and prevent the cornea from drying out. 13-17 The upper eyelid reaches upwards to the eyebrow, which separates it from the forehead. The lower eyelid extends downwards up to the lower border of the orbit, and is delimited by the genian region. 15 The palpebral fissure, which measures 9-10 mm in adults, is determined by the interaction of the muscles that open and close the eyelids. To open the eyelid, the palpebral elevator muscle is assisted by two other accessory muscles (Muller's and frontalis muscles). 18 The aging process decreases the palpebral fissure's vertical opening, due to the progressive lowering of the upper eyelid, 14 which is caused by a decrease in the upper eyelid lifter muscle's aponeurosis action. 15 The skin becomes more flaccid, less elastic and has a greater propensity to wrinkle 16. The orbicular and tarsal muscles, the orbital septum and the conjunctival mucous membrane also go through transformations in the elderly. In addition, gravity and facial expressions influence the mechanical deformation of those structures. 17 A cohort study with 320 patients (aged 10-89) evaluated participants' eyelids frontally and laterally and found that there is a correlation between a decrease in the palpebral fissure and an increase in the age of patients. 19 PALPEBRAL REGION'S SKIN AND SUBCUTANEOUS TISSUE Palpebral skin is the thinnest in the human body (< 1 mm). Its epidermis is constituted of stratified epithelium, which is very thin (0.4 mm) compared to that of the palmoplantar region (the thickness of which is approximately 1.6 mm). 13 The nasal portion of the palpebral skin has thinner hair and more sebaceous glands (i.e., it is softer and oilier) than its temporal portion. The transition between the eyelids' thin skin and the remaining facial skin is clinically observable. 13 The palpebral dermis is composed of loose conjunctive tissue, and is extremely thin in that region. It is absent in the pre-tarsal skin, in the medial and lateral ligaments of the eyelid, where the skin adheres to the underlying fibrous tissue. The thinness of the skin, combined with the lack of fatty tissue, gives that region its characteristic translucency. As a result, the accumulation of melanin and/or vessel dilatation in that region can be easily seen, through transparency, as bilateral homogeneous hyperpigmentation. 2,4,5,13 PALPEBRAL REGION'S VENOUS AND LYMPH VASCULARIZATION The eyelids' arterial irrigation comes through many vessels: the supratrochlear, supraorbital, lachrymal and dorsum of the nose arteries (all originating in the facial artery); the angular artery (originating in the facial artery); the transverse artery (originating in the facial artery); the transverse facial artery (originating in the superficial temporal artery) and the branches of the superficial temporal artery itself 20 (Figure 1).
Venous drainage (following an external pattern) takes place through the veins associated with these arteries and (following an internal pattern) penetrates the orbit through connections with ophthalmic veins 20 (Figure 2). Lymphatic drainage takes place mainly through the parotid lymph nodes; some of the drainage from the medial angle of the eye to the lymph vessels is associated with the angular and facial arteries, towards the submandibular lymph nodes. 20 COLOR OF THE SKIN IN THE PALPEBRAL REGION The palpebral skin's color results from the combination of several factors, some of genetic-racial origin (such as the amount of melanin pigment), others of individual or regional and even gender origins, such as the thickness of the several components and the blood volume in their vessels. USER: Summarize the information about periorbital hyperchromia in about 600 words. In bold at the end of the response, tell me what type of individuals are most affected by it. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
29
1,093
null
173
Use only information from the provided text to answer questions. Do not use outside knowledge and do not answer based on common sense. If you can't determine the answer based on the provided text, you should say "I can't find the answer to your question in the provided text."
In brief, what are 2-3 issues foreseen with this bill?
On July 20, 2022, the House Energy and Commerce Committee voted 53-2 to advance the American Data Privacy and Protection Act (ADPPA), H.R. 8152, to the full House of Representatives. The ADPPA would create a comprehensive federal consumer privacy framework. Some commentators have noted the bill’s novel compromises on two issues that have impeded previous attempts to create a national privacy framework: whether to preempt state privacy laws and whether to create a private right of action. The bipartisan bill is co-sponsored by House Energy and Commerce Committee Chairman Frank Pallone, Jr. and Ranking Member Cathy McMorris Rodgers, and is promoted in the Senate by Commerce Committee Ranking Member Roger Wicker. In a joint statement, Representatives Pallone and McMorris Rodgers and Senator Wicker described the bill as “strik[ing] a meaningful balance” on key issues. Senate Commerce Committee Chair Maria Cantwell has critiqued the ADPPA as having “major enforcement holes,” prompting other commentators to question whether the Senate will pass the bill. This Sidebar first provides a summary of the version of the ADPPA ordered to be reported by the House Commerce Committee on July 20. It then compares several of the bill’s key provisions to other privacy bills from the 117th and 116th Congresses before examining some considerations for Congress. Summary of the Bill The ADPPA would govern how companies across different industries treat consumer data. While not an exhaustive summary, some key facets of the bill are as follows: • Covered Entities. The bill would apply to most entities, including nonprofits and common carriers. Some entities, such as those defined as large data holders that meet certain thresholds and service providers that use data on behalf of other entities (including covered entities, government entities, and other service providers), would face different or additional requirements. • Covered Data. The bill would apply to information that “identifies or is linked or reasonably linkable” to an individual. • Duties of Loyalty. The bill would prohibit covered entities from collecting, using, or transferring covered data beyond what is reasonably necessary and proportionate to provide a service requested by the individual, unless the collection, use, or disclosure would fall under one of seventeen permissible purposes. It also would create special protections for certain types of sensitive covered data, defined as sixteen different categories of data. Among other things, the bill would require covered entities to get a consumer’s affirmative, express consent before transferring their sensitive covered data to a third party, unless a specific exception applies. • Transparency. The bill would require covered entities to disclose, among other things, the type of data they collect, what they use it for, how long they retain it, and whether they make the data accessible to the People’s Republic of China, Russia, Iran, or North Korea. • Consumer Control and Consent. The bill would give consumers various rights over covered data, including the right to access, correct, and delete their data held by a particular covered entity. It would further require covered entities to give consumers an opportunity to object before the entity transfers their data to a third party or targets advertising toward them. • Youth Protections.
The bill would create additional data protections for individuals under age 17, including a prohibition on targeted advertising, and it would establish a Youth Privacy and Marketing Division at the Federal Trade Commission (FTC). These additional protections would only apply when the covered entity knows the individual in question is under age 17, though certain social media companies or large data holders would be deemed to “know” an individual’s age in more circumstances. • Third-Party Collecting Entities. The bill would create specific obligations for third-party collecting entities, which are entities whose main source of revenue comes from processing or transferring data that they do not directly collect from consumers (e.g., data brokers). These entities would have to comply with FTC auditing regulations and, if they collect data above the threshold amount of individuals or devices, would have to register with the FTC. The FTC would establish a searchable registry of third-party collecting entities and a “Do Not Collect” mechanism by which individuals could request that all registered entities refrain from collecting covered data relating to the individual. The ADPPA has bipartisan support, and various interest groups and commentators, such as the Electronic Privacy Information Center, the Center for Democracy & Technology, and the Washington Post’s editorial board, have expressed enthusiasm for the bill. In an August 25 letter to House Speaker Nancy Pelosi, forty-eight different public interest groups urged Congress to move the ADPPA forward through Congress, stating that the bill is a “meaningful compromise” and that a failure to act may “forestall progress on this issue for years to come.” At the same time, some Members of Congress and other commentators have raised concerns with the bill. Senators Cantwell and Schatz, for example, have both criticized the bill’s failure to impose a “duty of loyalty” on covered entities. While the ADPPA has various requirements that are classified under a “Duty of Loyalty” heading, these requirements differ from those included in COPRA or the Data Care Act. COPRA’s “duty of loyalty” would prohibit businesses from engaging in “harmful” data practices, which the bill defines to mean using covered data “in a manner that causes or is likely to cause” injury to the subject of the covered data. The Data Care Act’s “duty of loyalty” would prohibit covered providers from using data in a way that would “benefit the [provider] to the detriment of the end user” and would “result in reasonably foreseeable and material physical harm” or “be unexpected and highly offensive” to the end user. The ADPPA’s “Duty of Loyalty” imposes a data minimization requirement and defines several specific prohibited data practices, but it does not broadly prohibit providers from acting in ways that could harm individuals. Some have also raised concerns over the ADPPA’s preemption provisions. The Attorney General of California sent Congress a letter co-signed by nine other state attorneys general criticizing the ADPPA because it would set a “ceiling” for privacy rights rather than a “floor.” These state attorneys general argue that states should be allowed to adopt their own privacy laws so they can “legislate responsively” to changes in technology and practices. In the Commerce Committee’s July 20 markup of the ADPPA, some Members expressed similar concerns over the ADPPA’s preemption of state law.
Other Members and commentators have pushed back on these criticisms, pointing to the strengths of the ADPPA’s protections and the importance of setting a federal standard.
On July 20, 2022, the House Energy and Commerce Committee voted 53-2 to advance the American Data Privacy and Protection Act (ADPPA), H.R. 8152, to the full House of Representatives. The ADPPA would create a comprehensive federal consumer privacy framework. Some commentators have noted the bill’s novel compromises on two issues that have impeded previous attempts to create a national privacy framework: whether to preempt state privacy laws and whether to create a private right of action. The bipartisan bill is co-sponsored by House Energy and Commerce Committee Chairman Frank Pallone, Jr. and Ranking Member Cathy McMorris Rodgers, and is promoted in the Senate by Commerce Committee Ranking Member Roger Wicker. In a joint statement, Representatives Pallone and McMorris Rodgers and Senator Wicker described the bill as “strik[ing] a meaningful balance” on key issues. Senate Commerce Committee Chair Maria Cantwell has critiqued the ADPPA as having “major enforcement holes,” prompting other commentators to question whether the Senate will pass the bill. This Sidebar first provides a summary of the version of the ADPPA ordered to be reported by the House Commerce Committee on July 20. It then compares several of the bill’s key provisions to other privacy bills from the 117th and 116th Congresses before examining some considerations for Congress. Summary of the Bill The ADPPA would govern how companies across different industries treat consumer data. While not an exhaustive summary, some key facets of the bill are as follows: • Covered Entities. The bill would apply to most entities, including nonprofits and common carriers. Some entities, such as those defined as large data holders that meet certain thresholds and service providers that use data on behalf of other entities (including covered entities, government entities, and other service providers), would face different or additional requirements. • Covered Data. The bill would apply to information that “identifies or is linked or reasonably linkable” to an individual. • Duties of Loyalty. The bill would prohibit covered entities from collecting, using, or transferring covered data beyond what is reasonably necessary and proportionate to provide a service requested by the individual, unless the collection, use, or disclosure would fall under one of seventeen permissible purposes. It also would create special protections for certain types of sensitive covered data, defined as sixteen different categories of data. Among other things, the bill would require covered entities to get a consumer’s affirmative, express consent before transferring their sensitive covered data to a third party, unless a specific exception applies. • Transparency. The bill would require covered entities to disclose, among other things, the type of data they collect, what they use it for, how long they retain it, and whether they make the data accessible to the People’s Republic of China, Russia, Iran, or North Korea. • Consumer Control and Consent. The bill would give consumers various rights over covered data, including the right to access, correct, and delete their data held by a particular covered entity. It would further require covered entities to give consumers an opportunity to object before the entity transfers their data to a third party or targets advertising toward them. • Youth Protections.
The bill would create additional data protections for individuals under age 17, including a prohibition on targeted advertising, and it would establish a Youth Privacy and Marketing Division at the Federal Trade Commission (FTC). These additional protections would only apply when the covered entity knows the individual in question is under age 17, though certain social media companies or large data holders would be deemed to “know” an individual’s age in more circumstances.  Third-Party Collecting Entities. The bill would create specific obligations for third-party collecting entities, which are entities whose main source of revenue comes from processing or transferring data that they do not directly collect from consumers (e.g., data brokers). These entities would have to comply with FTC auditing regulations and, if they collect data above the threshold amount of individuals or devices, would have to register with the FTC. The FTC would establish a searchable registry of third-party collecting entities and a “Do Not Collect” mechanism by which individuals could request that all registered entities refrain from collecting covered data relating to the individual. The ADPPA has bipartisan support, and various interest groups and commentators, such as the Electronic Privacy Information Center, the Center for Democracy & Technology, and the Washington Post’s editorial board, have expressed enthusiasm for the bill. In an August 25 letter to House Speaker Nancy Pelosi, forty-eight different public interest groups urged Congress to move the ADPPA forward through Congress, stating that the bill is a “meaningful compromise” and that a failure to act may “forestall progress on this issue for years to come.” At the same time, some Members of Congress and other commentators have raised concerns with the bill. Senators Cantwell and Schatz, for example, have both criticized the bill’s failure to impose a “duty of loyalty” on covered entities. While the ADPPA has various requirements that are classified under a “Duty of Loyalty” heading, these requirements differ from those included in COPRA or the Data Care Act. COPRA’s “duty of loyalty” would prohibit businesses from engaging in “harmful” data practices, which the bill defines to mean using covered data “in a manner that causes or is likely to cause” injury to the subject of the covered data. The Data Care Act’s “duty of loyalty” would prohibit covered providers from using data in a way that would “benefit the [provider] to the detriment of the end user” and would “result in reasonably foreseeable and material physical harm” or “be unexpected and highly offensive” to the end user. The ADPPA’s “Duty of Loyalty” imposes a data minimization requirement and defines several specific prohibited data practices, but it does not broadly prohibit providers from acting in ways that could harm individuals. Some have also raised concerns over the ADPPA’s preemption provisions. The Attorney General of California sent Congress a letter co-signed by nine other state attorneys general criticizing the ADPPA because it would set a “ceiling” for privacy rights rather than a “floor.” These state attorneys general argue that states should be allowed to adopt their own privacy laws so they can “legislate responsively” to changes in technology and practices. In the Commerce Committee’s July 20 markup of the ADPPA, some Members expressed similar concerns over the ADPPA’s preemption of state law. 
Other Members and commentators have pushed back on these criticisms, pointing to the strengths of the ADPPA’s protections and the importance of setting a federal standard. Use only information from the provided text to answer questions. Do not use outside knowledge and do not answer based on common sense. If you can't determine the answer based on the provided text, you should say "I can't find the answer to your question in the provided text." In brief, what are 2-3 issues foreseen with this bill?
Use only information from the provided text to answer questions. Do not use outside knowledge and do not answer based on common sense. If you can't determine the answer based on the provided text, you should say "I can't find the answer to your question in the provided text." EVIDENCE: On July 20, 2022, the House Energy and Commerce Committee voted 53-2 to advance the American Data Privacy and Protection Act (ADPPA), H.R. 8152, to the full House of Representatives. The ADPPA would create a comprehensive federal consumer privacy framework. Some commentators have noted the bill’s novel compromises on two issues that have impeded previous attempts to create a national privacy framework: whether to preempt state privacy laws and whether to create a private right of action. The bipartisan bill is co-sponsored by House Energy and Commerce Committee Chairman Frank Pallone, Jr. and Ranking Member Cathy McMorris Rogers, and is promoted in the Senate by Commerce Committee Ranking Member Roger Wicker. In a joint statement, Representatives Pallone and McMorris Rodgers and Senator Wicker described the bill as “strik[ing] a meaningful balance” on key issues. Senate Commerce Committee Chair Maria Cantwell has critiqued the ADPPA as having “major enforcement holes,” prompting other commentators to question whether the Senate will pass the bill. This Sidebar first provides a summary of the version of the ADPPA ordered to be reported by the House Commerce Committee on July 20. It then compares several of the bill’s key provisions to other privacy bills from the 117th and 116th Congresses before examining some considerations for Congress. Summary of the Bill The ADPPA would govern how companies across different industries treat consumer data. While not an exhaustive summary, some key facets of the bill are as follows:  Covered Entities. The bill would apply to most entities, including nonprofits and common carriers. Some entities, such as those defined as large data holders that meet certain thresholds and service providers that use data on behalf of other entities (including covered entities, government entities, and other service providers), would face different or additional requirements.  Covered Data. The bill would apply to information that “identifies or is linked or reasonably linkable” to an individual.  Duties of Loyalty. The bill would prohibit covered entities from collecting, using, or transferring covered data beyond what is reasonably necessary and proportionate to provide a service requested by the individual, unless the collection, use, or disclosure would fall under one of seventeen permissible purposes. It also would create special protections for certain types of sensitive covered data, defined as sixteen different categories of data. Among other things, the bill would require covered entities to get a consumer’s affirmative, express consent before transferring their sensitive covered data to a third party, unless a specific exception applies.  Transparency. The bill would require covered entities to disclose, among other things, the type of data they collect, what they use it for, how long they retain it, and whether they make the data accessible to the People’s Republic of China, Russia, Iran, or North Korea.  Consumer Control and Consent. The bill would give consumers various rights over covered data, including the right to access, correct, and delete their data held by a particular covered entity. 
It would further require covered entities to give consumers an opportunity to object before the entity transfers their data to a third party or targets advertising toward them.  Youth Protections. The bill would create additional data protections for individuals under age 17, including a prohibition on targeted advertising, and it would establish a Youth Privacy and Marketing Division at the Federal Trade Commission (FTC). These additional protections would only apply when the covered entity knows the individual in question is under age 17, though certain social media companies or large data holders would be deemed to “know” an individual’s age in more circumstances.  Third-Party Collecting Entities. The bill would create specific obligations for third-party collecting entities, which are entities whose main source of revenue comes from processing or transferring data that they do not directly collect from consumers (e.g., data brokers). These entities would have to comply with FTC auditing regulations and, if they collect data above the threshold amount of individuals or devices, would have to register with the FTC. The FTC would establish a searchable registry of third-party collecting entities and a “Do Not Collect” mechanism by which individuals could request that all registered entities refrain from collecting covered data relating to the individual. The ADPPA has bipartisan support, and various interest groups and commentators, such as the Electronic Privacy Information Center, the Center for Democracy & Technology, and the Washington Post’s editorial board, have expressed enthusiasm for the bill. In an August 25 letter to House Speaker Nancy Pelosi, forty-eight different public interest groups urged Congress to move the ADPPA forward through Congress, stating that the bill is a “meaningful compromise” and that a failure to act may “forestall progress on this issue for years to come.” At the same time, some Members of Congress and other commentators have raised concerns with the bill. Senators Cantwell and Schatz, for example, have both criticized the bill’s failure to impose a “duty of loyalty” on covered entities. While the ADPPA has various requirements that are classified under a “Duty of Loyalty” heading, these requirements differ from those included in COPRA or the Data Care Act. COPRA’s “duty of loyalty” would prohibit businesses from engaging in “harmful” data practices, which the bill defines to mean using covered data “in a manner that causes or is likely to cause” injury to the subject of the covered data. The Data Care Act’s “duty of loyalty” would prohibit covered providers from using data in a way that would “benefit the [provider] to the detriment of the end user” and would “result in reasonably foreseeable and material physical harm” or “be unexpected and highly offensive” to the end user. The ADPPA’s “Duty of Loyalty” imposes a data minimization requirement and defines several specific prohibited data practices, but it does not broadly prohibit providers from acting in ways that could harm individuals. Some have also raised concerns over the ADPPA’s preemption provisions. The Attorney General of California sent Congress a letter co-signed by nine other state attorneys general criticizing the ADPPA because it would set a “ceiling” for privacy rights rather than a “floor.” These state attorneys general argue that states should be allowed to adopt their own privacy laws so they can “legislate responsively” to changes in technology and practices. 
In the Commerce Committee’s July 20 markup of the ADPPA, some Members expressed similar concerns over the ADPPA’s preemption of state law. Other Members and commentators have pushed back on these criticisms, pointing to the strengths of the ADPPA’s protections and the importance of setting a federal standard. USER: In brief, what are 2-3 issues foreseen with this bill? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false
len_system: 49
len_user: 10
len_context: 1,086
target: null
row_id: 852
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
Can you explain the new funding approved for passenger rail in the Investing in America Agenda? What were some of the passenger corridors approved for funding? Answer in a minimum of 300 words.
President Biden’s Investing in America Agenda – a key pillar of Bidenomics – is delivering world class-infrastructure across the country, expanding access to economic opportunity, and creating good-paying jobs. By delivering $66 billion from the Bipartisan Infrastructure Law – the largest investment in passenger rail since the creation of Amtrak 50 years ago – President Biden is delivering on his vision to rebuild America and win the global competition for the 21st century. Today, the Biden-Harris Administration is announcing $8.2 billion in new funding for 10 major passenger rail projects across the country, including the first world-class high-speed rail projects in our country’s history. Key selected projects include: building a new high-speed rail system between California and Nevada, which will serve more than 11 million passengers annually; creating a high-speed rail line through California’s Central Valley to ultimately link Los Angeles and San Francisco, supporting travel with speeds up to 220 mph; delivering significant upgrades to frequently-traveled rail corridors in Virginia, North Carolina, and the District of Columbia; and upgrading and expanding capacity at Chicago Union Station in Illinois, one of the nation’s busiest rail hubs. These historic projects will create tens of thousands of good-paying, union jobs, unlock economic opportunity for communities across the country, and open up safe, comfortable, and climate-friendly travel options to get people to their destinations in a fraction of the time it takes to drive. The Biden-Harris Administration is building out a pipeline of passenger rail projects in every region of the country in order to achieve the President’s vision of world-class passenger rail. Announced projects will add new passenger rail service to cities that have historically lacked access to America’s rail network, connecting residents to jobs, healthcare, and educational opportunities. Investments will repair aging rail infrastructure to increase train speeds, reduce delays, benefit freight rail supply chains to boost America’s economy, significantly reduce greenhouse emissions, and create good-paying union jobs. Additionally, electric high-speed rail trains will take millions of cars off the roads and reduce emissions, further cementing intercity rail as an environmentally-friendly alternative to flying or driving and saving time for millions of Americans. These investments will also create tens of thousands of good-paying union jobs in construction and related industries – adding to over 100,000 jobs that the President is creating through historic investments in world-class rail. Today’s investment includes $8.2 billion through the Federal Railroad Administration’s Federal-State Partnership for Intercity Passenger Rail Program, as well as $34.5 million through the Corridor Identification and Development program to guide passenger rail development on 69 rail corridors across 44 states, ensuring that intercity rail projects are ready for implementation. President Biden will travel to Las Vegas, Nevada to make this announcement. To date, President Biden has announced $30 billion for rail projects across the country – including $16.4 billion on the Northeast Corridor, $1.4 billion for passenger rail and freight rail safety projects, and $570 million to upgrade or mitigate railroad crossings. 
Fed-State National Project selections include: The Brightline West High-Speed Intercity Passenger Rail System Project will receive up to $3 billion for a new 218-mile intercity passenger rail system between Las Vegas, Nevada, and Rancho Cucamonga, California. The project will create a new high-speed rail system, resulting in trip times of just over 2 hours – nearly twice as fast as driving. This route is expected to serve more than 11 million passengers annually, taking millions of cars off the road and, thanks to all-electric train sets, removing an estimated 400,000 tons of carbon dioxide per year. This project will create 35,000 jobs supporting construction and support 1,000 permanent jobs in operations and maintenance once in service. Brightline’s agreement with the California State and Southern Nevada Building Trades will ensure that this project is built with good-paying union labor, and the project has reached a separate agreement with Rail Labor to employ union workers for its ongoing operations and maintenance. The project will also allow for connections to the Los Angeles Metro area via the Metrolink commuter rail system. The California Inaugural High-Speed Rail Service Project will receive up to $3.07 billion to help deliver high-speed rail service in California’s Central Valley by designing and extending the rail line between Bakersfield and Merced, procuring new high-speed trainsets, and constructing the Fresno station, which will connect communities to urban centers in Northern and Southern California. This 171-mile rail corridor will support high-speed travel with speeds up to 220mph. The project will improve connectivity and increase travel options, along with providing more frequent passenger rail service, from the Central Valley to urban centers in northern and Southern California. New all-electric trainsets will produce zero emissions and be powered by 100% renewable energy. By separating passenger and freight lines, this project will benefit freight rail operations throughout California as well. This project has already created over 11,000 good-paying union construction jobs and has committed to using union labor for operations and maintenance. The Raleigh to Richmond (R2R) Innovating Rail Program Phases IA and II project will receive up to $1.1 billion to build approximately additional parts of the Southeast Corridor from Raleigh to Wake Forest, North Carolina, including new and upgraded track, eleven grade separations and closure of multiple at-grade crossings. The investment will improve system and service performance by developing a resilient and reliable passenger rail route that will also contribute to freight and supply chain resiliency in the southeastern U.S. The proposed project is part of a multi-phased effort to develop a new passenger rail route between Raleigh, North Carolina, and Richmond, Virginia, and better connect the southern states to DC and the Northeast Corridor. Once completed, this new route will save passengers an estimated 90 minutes per trip. The Long Bridge project, part of the Transforming Rail in Virginia – Phase II program, will receive $729 million to construct a new two-track rail bridge over the Potomac River to expand passenger rail capacity between Washington, D.C. and Richmond, VA. Nearly 6 million passengers travel over the existing bridge every year on Amtrak and Virginia Railway Express lines. This upgrade will reduce congestion and delays on this heavily-traveled corridor to our nation’s capital. 
As part of President Biden’s vision for world-class passenger rail, the Administration is planning for future rail growth in new and unprecedented ways through the Bipartisan Infrastructure Law-created Corridor ID Program. The program establishes a new planning framework for future investments, and corridor selections announced today stand to upgrade 15 existing rail routes, establish 47 extensions to existing and new conventional corridor routes, and advance 7 new high-speed rail projects, creating a pipeline of intercity passenger rail projects ready for future investment. Project selections include: Scranton to New York, reviving a dormant rail corridor between Pennsylvania, New Jersey, and New York, to provide up to three daily trips for commuters and other passengers; Colorado Front Range, a new rail corridor connecting Fort Collins, CO, and Pueblo, CO, to serve an area that currently has no passenger rail options; The Northern Lights Express, connecting Minneapolis, MN and Duluth, MN, with several stops in Wisconsin, for greater regional connectivity; Cascadia High-Speed Rail, a proposed new high-speed rail corridor linking Oregon, Washington, and Vancouver, with entirely new service; Charlotte to Atlanta, a new high-speed rail corridor linking the Southeast and providing connection to Hartsfield-Jackson Airport, the busiest airport in the world;
[question] Can you explain the new funding approved for passenger rail in the Investing in America Agenda? What were some of the passenger corridors approved for funding? Answer in a minimum of 300 words. ===================== [text] President Biden’s Investing in America Agenda – a key pillar of Bidenomics – is delivering world class-infrastructure across the country, expanding access to economic opportunity, and creating good-paying jobs. By delivering $66 billion from the Bipartisan Infrastructure Law – the largest investment in passenger rail since the creation of Amtrak 50 years ago – President Biden is delivering on his vision to rebuild America and win the global competition for the 21st century. Today, the Biden-Harris Administration is announcing $8.2 billion in new funding for 10 major passenger rail projects across the country, including the first world-class high-speed rail projects in our country’s history. Key selected projects include: building a new high-speed rail system between California and Nevada, which will serve more than 11 million passengers annually; creating a high-speed rail line through California’s Central Valley to ultimately link Los Angeles and San Francisco, supporting travel with speeds up to 220 mph; delivering significant upgrades to frequently-traveled rail corridors in Virginia, North Carolina, and the District of Columbia; and upgrading and expanding capacity at Chicago Union Station in Illinois, one of the nation’s busiest rail hubs. These historic projects will create tens of thousands of good-paying, union jobs, unlock economic opportunity for communities across the country, and open up safe, comfortable, and climate-friendly travel options to get people to their destinations in a fraction of the time it takes to drive. The Biden-Harris Administration is building out a pipeline of passenger rail projects in every region of the country in order to achieve the President’s vision of world-class passenger rail. Announced projects will add new passenger rail service to cities that have historically lacked access to America’s rail network, connecting residents to jobs, healthcare, and educational opportunities. Investments will repair aging rail infrastructure to increase train speeds, reduce delays, benefit freight rail supply chains to boost America’s economy, significantly reduce greenhouse emissions, and create good-paying union jobs. Additionally, electric high-speed rail trains will take millions of cars off the roads and reduce emissions, further cementing intercity rail as an environmentally-friendly alternative to flying or driving and saving time for millions of Americans. These investments will also create tens of thousands of good-paying union jobs in construction and related industries – adding to over 100,000 jobs that the President is creating through historic investments in world-class rail. Today’s investment includes $8.2 billion through the Federal Railroad Administration’s Federal-State Partnership for Intercity Passenger Rail Program, as well as $34.5 million through the Corridor Identification and Development program to guide passenger rail development on 69 rail corridors across 44 states, ensuring that intercity rail projects are ready for implementation. President Biden will travel to Las Vegas, Nevada to make this announcement. 
To date, President Biden has announced $30 billion for rail projects across the country – including $16.4 billion on the Northeast Corridor, $1.4 billion for passenger rail and freight rail safety projects, and $570 million to upgrade or mitigate railroad crossings. Fed-State National Project selections include: The Brightline West High-Speed Intercity Passenger Rail System Project will receive up to $3 billion for a new 218-mile intercity passenger rail system between Las Vegas, Nevada, and Rancho Cucamonga, California. The project will create a new high-speed rail system, resulting in trip times of just over 2 hours – nearly twice as fast as driving. This route is expected to serve more than 11 million passengers annually, taking millions of cars off the road and, thanks to all-electric train sets, removing an estimated 400,000 tons of carbon dioxide per year. This project will create 35,000 jobs supporting construction and support 1,000 permanent jobs in operations and maintenance once in service. Brightline’s agreement with the California State and Southern Nevada Building Trades will ensure that this project is built with good-paying union labor, and the project has reached a separate agreement with Rail Labor to employ union workers for its ongoing operations and maintenance. The project will also allow for connections to the Los Angeles Metro area via the Metrolink commuter rail system. The California Inaugural High-Speed Rail Service Project will receive up to $3.07 billion to help deliver high-speed rail service in California’s Central Valley by designing and extending the rail line between Bakersfield and Merced, procuring new high-speed trainsets, and constructing the Fresno station, which will connect communities to urban centers in Northern and Southern California. This 171-mile rail corridor will support high-speed travel with speeds up to 220mph. The project will improve connectivity and increase travel options, along with providing more frequent passenger rail service, from the Central Valley to urban centers in northern and Southern California. New all-electric trainsets will produce zero emissions and be powered by 100% renewable energy. By separating passenger and freight lines, this project will benefit freight rail operations throughout California as well. This project has already created over 11,000 good-paying union construction jobs and has committed to using union labor for operations and maintenance. The Raleigh to Richmond (R2R) Innovating Rail Program Phases IA and II project will receive up to $1.1 billion to build approximately additional parts of the Southeast Corridor from Raleigh to Wake Forest, North Carolina, including new and upgraded track, eleven grade separations and closure of multiple at-grade crossings. The investment will improve system and service performance by developing a resilient and reliable passenger rail route that will also contribute to freight and supply chain resiliency in the southeastern U.S. The proposed project is part of a multi-phased effort to develop a new passenger rail route between Raleigh, North Carolina, and Richmond, Virginia, and better connect the southern states to DC and the Northeast Corridor. Once completed, this new route will save passengers an estimated 90 minutes per trip. The Long Bridge project, part of the Transforming Rail in Virginia – Phase II program, will receive $729 million to construct a new two-track rail bridge over the Potomac River to expand passenger rail capacity between Washington, D.C. 
and Richmond, VA. Nearly 6 million passengers travel over the existing bridge every year on Amtrak and Virginia Railway Express lines. This upgrade will reduce congestion and delays on this heavily-traveled corridor to our nation’s capital. As part of President Biden’s vision for world-class passenger rail, the Administration is planning for future rail growth in new and unprecedented ways through the Bipartisan Infrastructure Law-created Corridor ID Program. The program establishes a new planning framework for future investments, and corridor selections announced today stand to upgrade 15 existing rail routes, establish 47 extensions to existing and new conventional corridor routes, and advance 7 new high-speed rail projects, creating a pipeline of intercity passenger rail projects ready for future investment. Project selections include: Scranton to New York, reviving a dormant rail corridor between Pennsylvania, New Jersey, and New York, to provide up to three daily trips for commuters and other passengers; Colorado Front Range, a new rail corridor connecting Fort Collins, CO, and Pueblo, CO, to serve an area that currently has no passenger rail options; The Northern Lights Express, connecting Minneapolis, MN and Duluth, MN, with several stops in Wisconsin, for greater regional connectivity; Cascadia High-Speed Rail, a proposed new high-speed rail corridor linking Oregon, Washington, and Vancouver, with entirely new service; Charlotte to Atlanta, a new high-speed rail corridor linking the Southeast and providing connection to Hartsfield-Jackson Airport, the busiest airport in the world; https://www.whitehouse.gov/briefing-room/statements-releases/2023/12/08/fact-sheet-president-biden-announces-billions-to-deliver-world-class-high-speed-rail-and-launch-new-passenger-rail-corridors-across-the-country/ ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
[question] [user request] ===================== [text] [context document] ===================== [instruction] Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. EVIDENCE: President Biden’s Investing in America Agenda – a key pillar of Bidenomics – is delivering world class-infrastructure across the country, expanding access to economic opportunity, and creating good-paying jobs. By delivering $66 billion from the Bipartisan Infrastructure Law – the largest investment in passenger rail since the creation of Amtrak 50 years ago – President Biden is delivering on his vision to rebuild America and win the global competition for the 21st century. Today, the Biden-Harris Administration is announcing $8.2 billion in new funding for 10 major passenger rail projects across the country, including the first world-class high-speed rail projects in our country’s history. Key selected projects include: building a new high-speed rail system between California and Nevada, which will serve more than 11 million passengers annually; creating a high-speed rail line through California’s Central Valley to ultimately link Los Angeles and San Francisco, supporting travel with speeds up to 220 mph; delivering significant upgrades to frequently-traveled rail corridors in Virginia, North Carolina, and the District of Columbia; and upgrading and expanding capacity at Chicago Union Station in Illinois, one of the nation’s busiest rail hubs. These historic projects will create tens of thousands of good-paying, union jobs, unlock economic opportunity for communities across the country, and open up safe, comfortable, and climate-friendly travel options to get people to their destinations in a fraction of the time it takes to drive. The Biden-Harris Administration is building out a pipeline of passenger rail projects in every region of the country in order to achieve the President’s vision of world-class passenger rail. Announced projects will add new passenger rail service to cities that have historically lacked access to America’s rail network, connecting residents to jobs, healthcare, and educational opportunities. Investments will repair aging rail infrastructure to increase train speeds, reduce delays, benefit freight rail supply chains to boost America’s economy, significantly reduce greenhouse emissions, and create good-paying union jobs. Additionally, electric high-speed rail trains will take millions of cars off the roads and reduce emissions, further cementing intercity rail as an environmentally-friendly alternative to flying or driving and saving time for millions of Americans. These investments will also create tens of thousands of good-paying union jobs in construction and related industries – adding to over 100,000 jobs that the President is creating through historic investments in world-class rail. Today’s investment includes $8.2 billion through the Federal Railroad Administration’s Federal-State Partnership for Intercity Passenger Rail Program, as well as $34.5 million through the Corridor Identification and Development program to guide passenger rail development on 69 rail corridors across 44 states, ensuring that intercity rail projects are ready for implementation. President Biden will travel to Las Vegas, Nevada to make this announcement. 
To date, President Biden has announced $30 billion for rail projects across the country – including $16.4 billion on the Northeast Corridor, $1.4 billion for passenger rail and freight rail safety projects, and $570 million to upgrade or mitigate railroad crossings. Fed-State National Project selections include: The Brightline West High-Speed Intercity Passenger Rail System Project will receive up to $3 billion for a new 218-mile intercity passenger rail system between Las Vegas, Nevada, and Rancho Cucamonga, California. The project will create a new high-speed rail system, resulting in trip times of just over 2 hours – nearly twice as fast as driving. This route is expected to serve more than 11 million passengers annually, taking millions of cars off the road and, thanks to all-electric train sets, removing an estimated 400,000 tons of carbon dioxide per year. This project will create 35,000 jobs supporting construction and support 1,000 permanent jobs in operations and maintenance once in service. Brightline’s agreement with the California State and Southern Nevada Building Trades will ensure that this project is built with good-paying union labor, and the project has reached a separate agreement with Rail Labor to employ union workers for its ongoing operations and maintenance. The project will also allow for connections to the Los Angeles Metro area via the Metrolink commuter rail system. The California Inaugural High-Speed Rail Service Project will receive up to $3.07 billion to help deliver high-speed rail service in California’s Central Valley by designing and extending the rail line between Bakersfield and Merced, procuring new high-speed trainsets, and constructing the Fresno station, which will connect communities to urban centers in Northern and Southern California. This 171-mile rail corridor will support high-speed travel with speeds up to 220mph. The project will improve connectivity and increase travel options, along with providing more frequent passenger rail service, from the Central Valley to urban centers in northern and Southern California. New all-electric trainsets will produce zero emissions and be powered by 100% renewable energy. By separating passenger and freight lines, this project will benefit freight rail operations throughout California as well. This project has already created over 11,000 good-paying union construction jobs and has committed to using union labor for operations and maintenance. The Raleigh to Richmond (R2R) Innovating Rail Program Phases IA and II project will receive up to $1.1 billion to build approximately additional parts of the Southeast Corridor from Raleigh to Wake Forest, North Carolina, including new and upgraded track, eleven grade separations and closure of multiple at-grade crossings. The investment will improve system and service performance by developing a resilient and reliable passenger rail route that will also contribute to freight and supply chain resiliency in the southeastern U.S. The proposed project is part of a multi-phased effort to develop a new passenger rail route between Raleigh, North Carolina, and Richmond, Virginia, and better connect the southern states to DC and the Northeast Corridor. Once completed, this new route will save passengers an estimated 90 minutes per trip. The Long Bridge project, part of the Transforming Rail in Virginia – Phase II program, will receive $729 million to construct a new two-track rail bridge over the Potomac River to expand passenger rail capacity between Washington, D.C. 
and Richmond, VA. Nearly 6 million passengers travel over the existing bridge every year on Amtrak and Virginia Railway Express lines. This upgrade will reduce congestion and delays on this heavily-traveled corridor to our nation’s capital. As part of President Biden’s vision for world-class passenger rail, the Administration is planning for future rail growth in new and unprecedented ways through the Bipartisan Infrastructure Law-created Corridor ID Program. The program establishes a new planning framework for future investments, and corridor selections announced today stand to upgrade 15 existing rail routes, establish 47 extensions to existing and new conventional corridor routes, and advance 7 new high-speed rail projects, creating a pipeline of intercity passenger rail projects ready for future investment. Project selections include: Scranton to New York, reviving a dormant rail corridor between Pennsylvania, New Jersey, and New York, to provide up to three daily trips for commuters and other passengers; Colorado Front Range, a new rail corridor connecting Fort Collins, CO, and Pueblo, CO, to serve an area that currently has no passenger rail options; The Northern Lights Express, connecting Minneapolis, MN and Duluth, MN, with several stops in Wisconsin, for greater regional connectivity; Cascadia High-Speed Rail, a proposed new high-speed rail corridor linking Oregon, Washington, and Vancouver, with entirely new service; Charlotte to Atlanta, a new high-speed rail corridor linking the Southeast and providing connection to Hartsfield-Jackson Airport, the busiest airport in the world; USER: Can you explain the new funding approved for passenger rail in the Investing in America Agenda? What were some of the passenger corridors approved for funding? Answer in a minimum of 300 words. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
has_url_in_context: false
len_system: 28
len_user: 33
len_context: 1,218
target: null
row_id: 603
Please answer the question using only the provided context. Format your answer as a list.
How can the Adobe Experience Platform make a business more profitable?
Adobe Experience Platform helps customers to centralise and standardise their customer data and content across the enterprise – powering 360° customer profiles, enabling data science and data governance to drive real-time personalised experiences. Experience Platform provides services that include capabilities for data ingestion, wrangling and analysing data, and building predictive models and next best action. Experience Platform makes the data, content and insights available to experience-delivery systems to act upon in real time, yielding compelling experiences in the relevant moment. With Experience Platform, enterprises will be able to utilise completely coordinated marketing and analytics solutions for driving meaningful customer interactions, leading to positive business results. An integral part of Experience Platform is sharing customer experience data to improve experiences for our customers as they work to deliver real-time experiences through our open and extensible platform. Companies want to leverage their customer experience data and share data and insights across all their experience applications (both Adobe applications and third-party applications). Sharing customer experience data in multiple formats from multiple sources can require too much time and too many resources. Adobe’s Experience Data Model (XDM) is a formal specification that you can integrate into your own data model to create a true 360-degree view of your customer, which saves you time and makes moving your data into Adobe Experience Cloud products a seamless process. Company executives in a variety of industries have found themselves thinking about a single issue: how to create a better user experience by delivering the right offer (or right message) at the right time. In order to find an answer to that issue, we need to understand the entire journey of a customer across multiple touchpoints, both online and offline. It’s not enough to know how the customer interacts within a website. You also have to know how the customer responds to emails and how they respond to any offline touchpoints (such as customer support calls or marketing postcards). Knowing the details of the complete journey will give businesses the information they need for better personalisation and will allow them to use machine learning to analyse the journey and deliver an individualised experience. Nine in ten marketers say data is their most underutilised asset. Why aren’t they deriving more value from the terabytes of information they collect? Primarily, it’s because that data isn’t immediately usable. Information compiled from varied sources — like websites, emails, sales, third-party vendors and even offline channels — tends to be siloed and structured in different formats. Even when one department within a firm gets relevant data into a format it can understand, the resulting intel is still largely unintelligible to other teams and departments. If all that data were translated into a single language — one that is equally useful and informative to sales representatives, IT departments, social-media marketers and customer service reps — companies could offer customers more compelling, personalised experiences in real time. Adobe’s Experience Data Model (XDM) is a formal specification used to describe this journey of experiences, as well as the resulting actions and events. XDM describes not only the journey, but also the measurement, content offers and other details of the journey. 
XDM is more than just a “data dictionary” for companies working with data from customer experiences — it’s a complete language for the experience business. XDM has been developed by Adobe as a way to make experience data easier to interpret and to share. Companies have been chasing the 360-degree customer view for years. The biggest problem is that every bit of data seems to be in a different format or on a different platform. You have your website, your email offers, your customer support system, your retail store and a rewards card, not to mention your search, display, social and video advertising across the web. Many of the systems you use to track those items don’t talk to each other or even store the information in a format the other systems can use. Since you want to use machine learning to derive insights and intelligence from the data, and then use those insights to drive company actions, those separate systems make getting a better view of your customer a difficult and time-consuming task. How can you talk about delivering a personalised experience for your customers if every system has a different definition of who the customer is? To make all these disparate data sets work together and be understood, Data Engineers and Data Scientists are in a constant process of translating and re-translating the data at every step. A large amount of that time is spent understanding the structure of the data before they can turn the data into something meaningful that you can use to create a better experience for your customers. But streamlining that data is easier said than done. Almost 40 percent of advertisers employ three or more data management platforms and 44 percent use three or more analytics platforms. By juggling multiple different data platforms, companies are more likely to drop sales leads. Data flowing in from a company’s smartphone app, for instance, might be in a completely different language than the data acquired from an email marketing campaign, a third-party vendor or from the point of sale. The average data scientist spends about 80 percent of their day preparing raw data for analysis, according to a recent poll from data mining company CrowdFlower. Every hour spent cleaning and structuring data is time that could be better spent drawing useful insights from that data, so companies can devise engaging customer experiences. Imagine if sales and marketing data existed in a single, standardised language from the moment it’s compiled — the same way Adobe standardised PDF for documents. Every business is an Experience Business. Whether you’re selling a product, a service or even an event, as long as another person is expected to interact with your company or product or service, then you are creating an experience. This is especially true for any business (or department) that deals with a customer’s ongoing interaction, such as customer support or loyalty clubs. XDM is a specification that describes the elements of those interactions. XDM can describe a consumer’s preferences and qualify what audiences they are part of and then categorise information about their online journey (such as what buttons they click on or what they add to a shopping cart). XDM can also define offline interactions such as loyalty-club memberships. XDM is a core part of the Adobe Experience Platform, built with partners and global brands that are strategically investing in this shared vision of omnipresent and consistent first-class customer experience. 
Modern customer interactions are unique because they go beyond what historically common data modelling approaches can support. Interacting with digital audiences requires capabilities such as engaging content, insights from data at scale, complete data awareness, identity management, unified profiles, omni-channel and experience-centric metadata, and the blending of real-time with historical behavioural data. Often, this data comes from multiple different vendors representing online behaviour across web and mobile as well as offline behaviour for in-store purchases, demographic information and user preferences. It is a labour-intensive process to combine all of these disparate data sources to get a 360-degree view of a consumer and speak to them with one voice across the various channels. XDM is the language to express these experiences.
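To make the idea of a shared experience-data language concrete, the sketch below shows how two differently shaped records (a web-analytics click and an in-store purchase) might be normalised into one unified, XDM-style event structure. This is an illustrative assumption only: the ExperienceEvent fields and the unify_* helpers are invented for the example and do not reproduce the actual Adobe XDM specification or any Adobe API.

# Illustrative sketch only: the schema and helpers below are hypothetical and do
# not reproduce the real Adobe XDM specification or any Adobe API.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ExperienceEvent:
    """A simplified, XDM-style record describing one customer touchpoint."""
    customer_id: str          # unified identity across channels
    channel: str              # e.g. "web", "email", "store"
    event_type: str           # e.g. "click", "open", "purchase"
    timestamp: str            # ISO 8601, always UTC
    product_sku: Optional[str] = None
    loyalty_member: bool = False


def unify_web_click(raw: dict) -> ExperienceEvent:
    """Translate a web-analytics payload into the shared structure."""
    return ExperienceEvent(
        customer_id=raw["visitorId"],
        channel="web",
        event_type=raw["action"],
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        product_sku=raw.get("sku"),
    )


def unify_store_purchase(raw: dict) -> ExperienceEvent:
    """Translate a point-of-sale record into the same shared structure."""
    return ExperienceEvent(
        customer_id=raw["loyalty_card"],
        channel="store",
        event_type="purchase",
        timestamp=raw["purchased_at"],
        product_sku=raw["item"],
        loyalty_member=True,
    )


if __name__ == "__main__":
    events = [
        unify_web_click({"visitorId": "cust-42", "action": "click", "ts": 1700000000, "sku": "SHOE-9"}),
        unify_store_purchase({"loyalty_card": "cust-42", "item": "SHOE-9",
                              "purchased_at": "2023-11-15T10:30:00+00:00"}),
    ]
    # Once both sources speak the same "language", a 360-degree journey view is a simple grouping.
    journey = [asdict(e) for e in events if e.customer_id == "cust-42"]
    print(journey)

Once every source is mapped into one structure like this at ingestion, the translating and re-translating described above happens once rather than before every analysis.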
Adobe Experience Platform helps customers to centralise and standardise their customer data and content across the enterprise – powering 360° customer profiles, enabling data science, and data governance to drive real-time personalised experiences. Experience Platform provides services that includes capabilities for data ingestion, wrangling and analysing data and building predictive models and next best action. Experience Platform makes the data, content and insights available to experience-delivery systems to act upon in real time, yielding compelling experiences in the relevant moment. With Experience Platform, enterprises will be able to utilise completely coordinated marketing and analytics solutions for driving meaningful customer interactions, leading to positive business results. An integral part of Experience Platform is sharing customer experience data to improve experiences for our customers as they work to deliver real-time experiences through our open and extensible platform. Companies want to leverage their customer experience data and share data and insights across all their experience applications (both Adobe applications and third-party applications). Sharing customer experience data in multiple formats from multiple sources can require too much time and too many resources. Adobe’s Experience Data Model (XDM) is a formal specification that you can integrate into your own data model to create a true 360-degree view of your customer, which saves you time and makes moving your data into Adobe Experience Cloud products a seamless process. Company executives in a variety of industries have found themselves thinking about a single issue: how to create a better user experience by delivering the right offer (or right message) at the right time. In order to find an answer to that issue, we need to understand the entire journey of a customer across multiple touchpoints both online and offline. It’s not enough knowing how the customer interacts within a website. You also have to know how the customer responds to emails and how they respond to any offline touchpoints (such as customer support calls or marketing postcards). Knowing the details of the complete journey will give businesses information they need for better personalisation and that will allow them to use machine learning to analyse the journey and deliver an individualised experience. Nine in ten marketers say data is their most underutilised asset. Why aren’t they deriving more value from the terabytes of information they collect? Primarily, it’s because that data isn’t immediately usable. Information compiled from varied sources — like websites, emails, sales, third-party vendors and even offline channels — tends to be siloed and structured in different formats. Even when one department within a firm gets relevant data into a format it can understand, the resulting intel is still largely unintelligible to other teams and departments. If all that data were translated into a single language — one that is equally useful and informative to sales representatives, IT departments, social-media marketers and customer service reps — companies could offer customers more compelling, personalised experiences in real time. Adobe’s Experience Data Model (XDM) is a formal specification used to describe this journey of experiences, as well as the resulting actions and events. XDM describes not only the journey, but also the measurement, content offers and other details of the journey. 
XDM is more than just a “data dictionary” for companies working with data from customer experiences — it’s a complete language for the experience business. XDM has been developed by Adobe as a way to make experience data easier to interpret and to share. Companies have been chasing the 360-degree customer view for years. The biggest problem is that every bit of data seems to be in a different format or on a different platform. You have your website, your email offers, your customer support system, your retail store and a rewards card, not to mention your search, display, social and video advertising across the web. Many of the systems you use to track those items don’t talk to each other or even store the information in a format the other systems can use. Since you want to use machine learning to derive insights and intelligence from the data, and then use those insights to drive company actions, those separate systems make getting a better view of your customer a difficult and time-consuming task. How can you talk about delivering a personalised experience for your customers if every system has a different definition of who the customer is? To make all these disparate data sets work together and be understood, Data Engineers and Data Scientists are in a constant process of translating and re-translating the data at every step. A large amount of that time is spent understanding the structure of the data before they can turn the data into something meaningful that you can use to create a better experience for your customers. But streamlining that data is easier said than done. Almost 40 percent of advertisers employ three or more data management platforms and 44 percent use three or more analytics platforms. By juggling multiple different data platforms, companies are more likely drop sales leads. Data flowing in from a company’s smartphone app, for instance, might be in a completely different language than the data acquired from an email marketing campaign, a third-party vendor or from the point of sale. The average data scientist spends about 80 percent of their day preparing raw data for analysis, according to a recent poll from data mining company CrowdFlower. Every hour spent cleaning and structuring data is time that could be better spent drawing useful insights from that data, so companies can devise engaging customer experiences. Imagine if sales and marketing data existed in a single, standardised language from the moment it’s compiled — the same way Adobe standardised PDF for documents. Every business is an Experience Business. Whether you’re selling a product, a service or even an event, as long as another person is expected to interact with your company or product or service, then you are creating an experience. This is especially true for any business (or department) that deals with a customer’s ongoing interaction, such as customer support or loyalty clubs. XDM is a specification that describes the elements of those interactions. XDM can describe a consumer’s preferences and qualify what audiences they are part of and then categorise information about their online journey (such as what buttons they click on or what they add to a shopping cart). XDM can also define offline interactions such as loyalty-club memberships. XDM is a core part of the Adobe Experience Platform, built with partners and global brands that are strategically investing in this shared vision of omnipresent and consistent first-class customer experience. 
Modern customer interactions are unique because they go beyond what historically common data modelling can support. Interacting with digital audiences requires capabilities such as engaging content, insights from data at scale, complete data awareness, identity management, unified profiles, omni-channel and experiencecentric metadata, and the blending of real-time with historical behavioural data. Often, this data comes from multiple different vendors representing online behaviour across web and mobile and offline behavior for instore purchases, demographic information and user preferences. It is a labour-intensive process to combine all of these disparate data sources to get a 360-degree view of a consumer and speak to them with one voice across the various channels. XDM is the language to express these experiences. Please answer the question using only the provided context. Format your answer as a list. How can the Adobe Experience Platform make a business more profitable?
Please answer the question using only the provided context. Format your answer as a list. EVIDENCE: Adobe Experience Platform helps customers to centralise and standardise their customer data and content across the enterprise – powering 360° customer profiles, enabling data science, and data governance to drive real-time personalised experiences. Experience Platform provides services that includes capabilities for data ingestion, wrangling and analysing data and building predictive models and next best action. Experience Platform makes the data, content and insights available to experience-delivery systems to act upon in real time, yielding compelling experiences in the relevant moment. With Experience Platform, enterprises will be able to utilise completely coordinated marketing and analytics solutions for driving meaningful customer interactions, leading to positive business results. An integral part of Experience Platform is sharing customer experience data to improve experiences for our customers as they work to deliver real-time experiences through our open and extensible platform. Companies want to leverage their customer experience data and share data and insights across all their experience applications (both Adobe applications and third-party applications). Sharing customer experience data in multiple formats from multiple sources can require too much time and too many resources. Adobe’s Experience Data Model (XDM) is a formal specification that you can integrate into your own data model to create a true 360-degree view of your customer, which saves you time and makes moving your data into Adobe Experience Cloud products a seamless process. Company executives in a variety of industries have found themselves thinking about a single issue: how to create a better user experience by delivering the right offer (or right message) at the right time. In order to find an answer to that issue, we need to understand the entire journey of a customer across multiple touchpoints both online and offline. It’s not enough knowing how the customer interacts within a website. You also have to know how the customer responds to emails and how they respond to any offline touchpoints (such as customer support calls or marketing postcards). Knowing the details of the complete journey will give businesses information they need for better personalisation and that will allow them to use machine learning to analyse the journey and deliver an individualised experience. Nine in ten marketers say data is their most underutilised asset. Why aren’t they deriving more value from the terabytes of information they collect? Primarily, it’s because that data isn’t immediately usable. Information compiled from varied sources — like websites, emails, sales, third-party vendors and even offline channels — tends to be siloed and structured in different formats. Even when one department within a firm gets relevant data into a format it can understand, the resulting intel is still largely unintelligible to other teams and departments. If all that data were translated into a single language — one that is equally useful and informative to sales representatives, IT departments, social-media marketers and customer service reps — companies could offer customers more compelling, personalised experiences in real time. Adobe’s Experience Data Model (XDM) is a formal specification used to describe this journey of experiences, as well as the resulting actions and events. 
XDM describes not only the journey, but also the measurement, content offers and other details of the journey. XDM is more than just a “data dictionary” for companies working with data from customer experiences — it’s a complete language for the experience business. XDM has been developed by Adobe as a way to make experience data easier to interpret and to share. Companies have been chasing the 360-degree customer view for years. The biggest problem is that every bit of data seems to be in a different format or on a different platform. You have your website, your email offers, your customer support system, your retail store and a rewards card, not to mention your search, display, social and video advertising across the web. Many of the systems you use to track those items don’t talk to each other or even store the information in a format the other systems can use. Since you want to use machine learning to derive insights and intelligence from the data, and then use those insights to drive company actions, those separate systems make getting a better view of your customer a difficult and time-consuming task. How can you talk about delivering a personalised experience for your customers if every system has a different definition of who the customer is? To make all these disparate data sets work together and be understood, Data Engineers and Data Scientists are in a constant process of translating and re-translating the data at every step. A large amount of that time is spent understanding the structure of the data before they can turn the data into something meaningful that you can use to create a better experience for your customers. But streamlining that data is easier said than done. Almost 40 percent of advertisers employ three or more data management platforms and 44 percent use three or more analytics platforms. By juggling multiple different data platforms, companies are more likely drop sales leads. Data flowing in from a company’s smartphone app, for instance, might be in a completely different language than the data acquired from an email marketing campaign, a third-party vendor or from the point of sale. The average data scientist spends about 80 percent of their day preparing raw data for analysis, according to a recent poll from data mining company CrowdFlower. Every hour spent cleaning and structuring data is time that could be better spent drawing useful insights from that data, so companies can devise engaging customer experiences. Imagine if sales and marketing data existed in a single, standardised language from the moment it’s compiled — the same way Adobe standardised PDF for documents. Every business is an Experience Business. Whether you’re selling a product, a service or even an event, as long as another person is expected to interact with your company or product or service, then you are creating an experience. This is especially true for any business (or department) that deals with a customer’s ongoing interaction, such as customer support or loyalty clubs. XDM is a specification that describes the elements of those interactions. XDM can describe a consumer’s preferences and qualify what audiences they are part of and then categorise information about their online journey (such as what buttons they click on or what they add to a shopping cart). XDM can also define offline interactions such as loyalty-club memberships. 
XDM is a core part of the Adobe Experience Platform, built with partners and global brands that are strategically investing in this shared vision of omnipresent and consistent first-class customer experience. Modern customer interactions are unique because they go beyond what historically common data modelling can support. Interacting with digital audiences requires capabilities such as engaging content, insights from data at scale, complete data awareness, identity management, unified profiles, omni-channel and experiencecentric metadata, and the blending of real-time with historical behavioural data. Often, this data comes from multiple different vendors representing online behaviour across web and mobile and offline behavior for instore purchases, demographic information and user preferences. It is a labour-intensive process to combine all of these disparate data sources to get a 360-degree view of a consumer and speak to them with one voice across the various channels. XDM is the language to express these experiences. USER: How can the Adobe Experience Platform make a business more profitable? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
15
11
1,212
null
580
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document]
My final research project is on fungal immunity. Read this section from a recent publication and explain the effects of the different interleukins mentioned in the section. Do not give an overview of the molecules, I only want to know their specific functions in fungal immunity. Limit to one sentence per interleukin.
Innate Immunity Innate Detection and Immune Evasion The lungs maintain many defense mechanisms to survey and eliminate airborne threats. Lung epithelial cells (LECs) secrete anti-microbial peptides, complement proteins, and defensins which enhance granulocyte activity and create a less hospitable environment for Coccidioides (Hernández-Santos et al., 2018). To survive, Coccidioides must successfully avoid detection from surveying and patrolling innate immune cells. Lung-resident macrophages, also known as alveolar macrophages, comprise up to 95% of pulmonary leukocytes and participate in early immune detection of pathogens and maintain the lung microenvironment (Wynn and Vannella, 2016). In Aspergillus infections, tissue-specific neutrophils are recruited by LECs and enter the lung early after infection due to β-glucan and chitin (Dubey et al., 2014). Innate leukocytes control early pathogen invasion via phagocytosis and production of reactive oxide and reactive nitrogen species (RNS) (Xu and Shinohara, 2017). β-glucan and chitin are conserved across many fungal species, including Coccidioides, so these molecules could interact with epithelial cells and aid in neutrophil recruitment. In cases where host immune responses cannot control infection, disease becomes chronic. Host responses sometimes control infections through granuloma formation in the lung as fungi is walled off instead of destroyed (Nguyen et al., 2013; Johnson et al., 2014; Wynn and Vannella, 2016). To survive lung defenses and evade innate immune responses, Coccidioides expresses virulence factors for immune evasion and survival. Inside the lung, arthroconidia express ornithine decarboxylase, an enzyme implicated during growth from arthroconidia to spherule state (Guevara-Olvera et al., 2000). During transition, the spherule internal cell wall segments bud off into endospores. Lifecycle transition allows vulnerable, easily phagocytosed, arthroconidia to develop into phagocytosis-resistant spherules (Hung et al., 2002; Gonzalez et al., 2011; Nguyen et al., 2013). Arthroconidia are vulnerable to RNS while mature spherules suppress nitric oxide species (NOS) and inducible NOS expression in macrophages (Figure 1) (Gonzalez et al., 2011). Mature spherules are too large for most host phagocytic activity, allowing Coccidioides to evade early immune detection (Hung et al., 2002). Coccidioides induces host expression of arginase resulting in ornithine and urea production, important components for transition from arthroconidia to spherule (Hung et al., 2007). Figure 1: Fungal dimorphism presents challenges for immune detection and activation. Early infection: Coccidioides is vulnerable to immune detection during early infection due to the smaller size (2–5 μM) and SOWgp expression which is detected via Dectin-1 and TLR2 on innate immune cells. These interactions mediate clearance via phagocytosis and reactive oxide species production. Later infection: As Coccidioides sporulates, it secretes MEP1 which digests SOWgp from the fungal surface, hampering immune detection. Spherules induce arginase expression in host tissues, suppressing NOS/NO production via an unknown mechanism, contributing to immune suppression. In the spherule state, Coccidioides secretes metalloproteinase 1 (Mep1) which digests an immunodominant antigen spherical outer wall glycoprotein (SOWgp) on the fungal surface (Figure 1) (Hung et al., 2005). 
Phagocytotic granulocytes rely on pathogen associated molecular patterns such as SOWgp, thus Mep1 secretion prevents detection by innate immune cells (Hung et al., 2005). Coccidioides upregulates nitrate reductase during development, an enzyme that converts nitrate to nitrite, thereby enhancing Coccidioides survival in anoxic conditions, such as those found inside a granuloma (Johannesson et al., 2006). Early detection to inhaled fungus is critical for host response. Macrophages and neutrophils detect Coccidioides arthroconidia and immature spherules via receptors Dectin-1, Dectin-2, and Mincle interacting with SOWgp (Hung et al., 2002; Nguyen et al., 2013). Endothelial lung cells use these same receptors to regulate defensin secretion. Toll-like receptors (TLRs) and c-type lectin receptors (CLRs) interact with major pathogen-associated molecular patterns to detect Coccidioides (Romani, 2004; Viriyakosol et al., 2008; Viriyakosol et al., 2013). Like most fungi, Coccidioides expresses β-glucans, chitins, and mannans in the outer cell wall (Nguyen et al., 2013). These cell components are recognized by a variety of TLRs and CLRs and elicit strong inflammatory responses from local immune cells. Coccidioides interactions with TLR2 and Dectin-1 on macrophages activate production of reactive oxide species (ROS) and inflammatory cytokines, such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNFα) (Viriyakosol et al., 2008; Viriyakosol et al., 2013). There are no known nucleotide-binding oligomerization domain-like (NOD-like) receptors yet associated with Coccidioides detection. In humans, polymorphisms in IFNγ/IL-12 signaling pathway result in a STAT1 gain of function mutations that associate with increased disease severity in Coccidioides, Histoplasma, and Candida infection (Sampaio et al., 2013). In disseminated Coccidioides, patients with severe disease were found to have a STAT3 mutation (Odio et al., 2015). STAT 3 mediates IL-23 signaling, critical for IFNγ, IL-12, and IL-17 production while STAT1 signaling induces Th1 cell differentiation in response to IL-12 to produce IFNγ; IFNγ, in turn, inhibits Th17 differentiation (Yeh et al., 2014). IL-12β1 receptor deficiency is associated with increased risk of disseminated coccidioidomycosis (Yeh et al., 2014). In chronic mucocutaneous candidiasis, gain of function mutations in STAT1 and STAT3 correlates to more severe disease and poor TH17 responses (Zheng et al., 2015). These observations suggest that STAT1 and STAT3 immune signaling is critical in host control of Th1/Th17 cytokine balance and is required for protection and Coccidioides fungal control. In Blastomyces dermatitidis infection, LECs regulate collaborative killing between alveolar macrophages, dendritic cells (DC), and neutrophils (Hussell and Bell, 2014; Hernández-Santos et al., 2018). Upon LECs ablation, B. dermatitidis phagocytosis is reduced, and viable yeast numbers increase. Other data suggests that IL-1/IL-1R interactions regulate CCL20 expression in LECs. Chemokine CCL20 strongly recruits lymphocytes and weakly recruits neutrophils (Hernández-Santos et al., 2018). IL-1R-deficient mice express less CCL20 and lung Th17 cells are reduced, suggesting that IL-1/IL-1R signaling in LECs could regulate adaptive immune functions (Hernández-Santos et al., 2018). IL-1R is critical for vaccine induced resistance to Coccidioides infection via MyD88 induction of Th17 responses (Hung et al., 2014a; Hung et al., 2016a). 
Though it has not been explored, LECs could mediate early responses to Coccidioides through IL-1R, suggesting another innate immune cell role in anti-fungal responses within the lung tissues. Alveoli structure likely helps shape local immune responses. Three dominant cell types exist within and around the alveoli structure: Type 1 and Type 2 pneumocytes (also known as alveolar epithelial cells, AECs), and tissue-resident alveolar macrophages (Guillot et al., 2013; Hussell and Bell, 2014). Type 1 pneumocytes (AECI) secrete IL-10 constitutively, which bind to IL-10R on alveolar macrophages to maintain an anti-inflammatory state. Type 2 pneumocytes (or AECII) express CD200 which interacts with CD200R on alveolar macrophage to inhibit pro-inflammatory phenotype (Guillot et al., 2013; Hernández-Santos et al., 2018). Alveolar macrophages express TGFβ-receptors that bind to pneumocyte-expressed αvβ6 integrin, tethering them in the alveolar airspace. In inflammatory conditions, AECIs upregulate TLRs and AECIIs increase SP-A and SP-D production (Guillot et al., 2013). These surfactant proteins are known to enhance pathogen opsonization and phagocytosis, and are capable of binding to Coccidioides antigen (Awasthi et al., 2004). Coccidioides infected mice expressed less SP-A and SP-D protein in their bronchial lavage fluid compared to uninfected and vaccinated controls, demonstrating the pathogen’s capability of altering the lung mucosa (Awasthi et al., 2004). AECII secreted production of surfactant proteins may be influenced by Coccidioides allowing fungal escape of phagocytosis and prolonged survival.
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== My final research project is on fungal immunity. Read this section from a recent publication and explain the effects of the different interleukins mentioned in the section. Do not give an overview of the molecules, I only want to know their specific functions in fungal immunity. Limit to one sentence per interleukin. {passage 0} ========== Innate Immunity Innate Detection and Immune Evasion The lungs maintain many defense mechanisms to survey and eliminate airborne threats. Lung epithelial cells (LECs) secrete anti-microbial peptides, complement proteins, and defensins which enhance granulocyte activity and create a less hospitable environment for Coccidioides (Hernández-Santos et al., 2018). To survive, Coccidioides must successfully avoid detection from surveying and patrolling innate immune cells. Lung-resident macrophages, also known as alveolar macrophages, comprise up to 95% of pulmonary leukocytes and participate in early immune detection of pathogens and maintain the lung microenvironment (Wynn and Vannella, 2016). In Aspergillus infections, tissue-specific neutrophils are recruited by LECs and enter the lung early after infection due to β-glucan and chitin (Dubey et al., 2014). Innate leukocytes control early pathogen invasion via phagocytosis and production of reactive oxide and reactive nitrogen species (RNS) (Xu and Shinohara, 2017). β-glucan and chitin are conserved across many fungal species, including Coccidioides, so these molecules could interact with epithelial cells and aid in neutrophil recruitment. In cases where host immune responses cannot control infection, disease becomes chronic. Host responses sometimes control infections through granuloma formation in the lung as fungi is walled off instead of destroyed (Nguyen et al., 2013; Johnson et al., 2014; Wynn and Vannella, 2016). To survive lung defenses and evade innate immune responses, Coccidioides expresses virulence factors for immune evasion and survival. Inside the lung, arthroconidia express ornithine decarboxylase, an enzyme implicated during growth from arthroconidia to spherule state (Guevara-Olvera et al., 2000). During transition, the spherule internal cell wall segments bud off into endospores. Lifecycle transition allows vulnerable, easily phagocytosed, arthroconidia to develop into phagocytosis-resistant spherules (Hung et al., 2002; Gonzalez et al., 2011; Nguyen et al., 2013). Arthroconidia are vulnerable to RNS while mature spherules suppress nitric oxide species (NOS) and inducible NOS expression in macrophages (Figure 1) (Gonzalez et al., 2011). Mature spherules are too large for most host phagocytic activity, allowing Coccidiodes to evade early immune detection (Hung et al., 2002). Coccidioides induces host expression of arginase resulting in ornithine and urea production, important components for transition from arthroconidia to spherule (Hung et al., 2007). FGURE 1 www.frontiersin.org Figure 1 Fungal dimorphism presents challenges for immune detection and activation. Early infection: Coccidioides is vulnerable to immune detection during early infection due to the smaller size (2–5 μM) and SOWgp expression which is detected via Dectin-1 and TLR2 on innate immune cells. These interactions mediate clearance via phagocytosis and reactive oxide species production. 
Later infection: As Coccidioides sporulates, it secretes MEP1 which digests SOWgp from the fungal surface, hampering immune detection. Spherules induce arginase expression in host tissues, suppressing NOS/NO production via an unknown mechanism, contributing to immune suppression. In the spherule state, Coccidioides secretes metalloproteinase 1 (Mep1) which digests an immunodominant antigen spherical outer wall glycoprotein (SOWgp) on the fungal surface (Figure 1) (Hung et al., 2005). Phagocytotic granulocytes rely on pathogen associated molecular patterns such as SOWgp, thus Mep1 secretion prevents detection by innate immune cells (Hung et al., 2005). Coccidioides upregulates nitrate reductase during development, an enzyme that converts nitrate to nitrite, thereby enhancing Coccidioides survival in anoxic conditions, such as those found inside a granuloma (Johannesson et al., 2006). Early detection to inhaled fungus is critical for host response. Macrophages and neutrophils detect Coccidioides arthroconidia and immature spherules via receptors Dectin-1, Dectin-2, and Mincle interacting with SOWgp (Hung et al., 2002; Nguyen et al., 2013). Endothelial lung cells use these same receptors to regulate defensin secretion. Toll-like receptors (TLRs) and c-type lectin receptors (CLRs) interact with major pathogen-associated molecular patterns to detect Coccidioides (Romani, 2004; Viriyakosol et al., 2008; Viriyakosol et al., 2013). Like most fungi, Coccidioides expresses β-glucans, chitins, and mannans in the outer cell wall (Nguyen et al., 2013). These cell components are recognized by a variety of TLRs and CLRs and elicit strong inflammatory responses from local immune cells. Coccidioides interactions with TLR2 and Dectin-1 on macrophages activate production of reactive oxide species (ROS) and inflammatory cytokines, such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNFα) (Viriyakosol et al., 2008; Viriyakosol et al., 2013). There are no known nucleotide-binding oligomerization domain-like (NOD-like) receptors yet associated with Coccidioides detection. In humans, polymorphisms in IFNγ/IL-12 signaling pathway result in a STAT1 gain of function mutations that associate with increased disease severity in Coccidioides, Histoplasma, and Candida infection (Sampaio et al., 2013). In disseminated Coccidioides, patients with severe disease were found to have a STAT3 mutation (Odio et al., 2015). STAT 3 mediates IL-23 signaling, critical for IFNγ, IL-12, and IL-17 production while STAT1 signaling induces Th1 cell differentiation in response to IL-12 to produce IFNγ; IFNγ, in turn, inhibits Th17 differentiation (Yeh et al., 2014). IL-12β1 receptor deficiency is associated with increased risk of disseminated coccidioidomycosis (Yeh et al., 2014). In chronic mucocutaneous candidiasis, gain of function mutations in STAT1 and STAT3 correlates to more severe disease and poor TH17 responses (Zheng et al., 2015). These observations suggest that STAT1 and STAT3 immune signaling is critical in host control of Th1/Th17 cytokine balance and is required for protection and Coccidioides fungal control. In Blastomyces dermatitidis infection, LECs regulate collaborative killing between alveolar macrophages, dendritic cells (DC), and neutrophils (Hussell and Bell, 2014; Hernández-Santos et al., 2018). Upon LECs ablation, B. dermatitidis phagocytosis is reduced, and viable yeast numbers increase. Other data suggests that IL-1/IL-1R interactions regulate CCL20 expression in LECs. 
Chemokine CCL20 strongly recruits lymphocytes and weakly recruits neutrophils (Hernández-Santos et al., 2018). IL-1R-deficient mice express less CCL20 and lung Th17 cells are reduced, suggesting that IL-1/IL-1R signaling in LECs could regulate adaptive immune functions (Hernández-Santos et al., 2018). IL-1R is critical for vaccine induced resistance to Coccidioides infection via MyD88 induction of Th17 responses (Hung et al., 2014a; Hung et al., 2016a). Though it has not been explored, LECs could mediate early responses to Coccidioides through IL-1R, suggesting another innate immune cell role in anti-fungal responses within the lung tissues. Alveoli structure likely helps shape local immune responses. Three dominant cell types exist within and around the alveoli structure: Type 1 and Type 2 pneumocytes (also known as alveolar epithelial cells, AECs), and tissue-resident alveolar macrophages (Guillot et al., 2013; Hussell and Bell, 2014). Type 1 pneumocytes (AECI) secrete IL-10 constitutively, which bind to IL-10R on alveolar macrophages to maintain an anti-inflammatory state. Type 2 pneumocytes (or AECII) express CD200 which interacts with CD200R on alveolar macrophage to inhibit pro-inflammatory phenotype (Guillot et al., 2013; Hernández-Santos et al., 2018). Alveolar macrophages express TGFβ-receptors that bind to pneumocyte-expressed αvβ6 integrin, tethering them in the alveolar airspace. In inflammatory conditions, AECIs upregulate TLRs and AECIIs increase SP-A and SP-D production (Guillot et al., 2013). These surfactant proteins are known to enhance pathogen opsonization and phagocytosis, and are capable of binding to Coccidioides antigen (Awasthi et al., 2004). Coccidioides infected mice expressed less SP-A and SP-D protein in their bronchial lavage fluid compared to uninfected and vaccinated controls, demonstrating the pathogen’s capability of altering the lung mucosa (Awasthi et al., 2004). AECII secreted production of surfactant proteins may be influenced by Coccidioides allowing fungal escape of phagocytosis and prolonged survival. https://www.frontiersin.org/journals/cellular-and-infection-microbiology/articles/10.3389/fcimb.2020.581101/full
{instruction} ========== In your answer, refer only to the context document. Do not employ any outside knowledge {question} ========== [user request] {passage 0} ========== [context document] EVIDENCE: Innate Immunity Innate Detection and Immune Evasion The lungs maintain many defense mechanisms to survey and eliminate airborne threats. Lung epithelial cells (LECs) secrete anti-microbial peptides, complement proteins, and defensins which enhance granulocyte activity and create a less hospitable environment for Coccidioides (Hernández-Santos et al., 2018). To survive, Coccidioides must successfully avoid detection from surveying and patrolling innate immune cells. Lung-resident macrophages, also known as alveolar macrophages, comprise up to 95% of pulmonary leukocytes and participate in early immune detection of pathogens and maintain the lung microenvironment (Wynn and Vannella, 2016). In Aspergillus infections, tissue-specific neutrophils are recruited by LECs and enter the lung early after infection due to β-glucan and chitin (Dubey et al., 2014). Innate leukocytes control early pathogen invasion via phagocytosis and production of reactive oxide and reactive nitrogen species (RNS) (Xu and Shinohara, 2017). β-glucan and chitin are conserved across many fungal species, including Coccidioides, so these molecules could interact with epithelial cells and aid in neutrophil recruitment. In cases where host immune responses cannot control infection, disease becomes chronic. Host responses sometimes control infections through granuloma formation in the lung as fungi is walled off instead of destroyed (Nguyen et al., 2013; Johnson et al., 2014; Wynn and Vannella, 2016). To survive lung defenses and evade innate immune responses, Coccidioides expresses virulence factors for immune evasion and survival. Inside the lung, arthroconidia express ornithine decarboxylase, an enzyme implicated during growth from arthroconidia to spherule state (Guevara-Olvera et al., 2000). During transition, the spherule internal cell wall segments bud off into endospores. Lifecycle transition allows vulnerable, easily phagocytosed, arthroconidia to develop into phagocytosis-resistant spherules (Hung et al., 2002; Gonzalez et al., 2011; Nguyen et al., 2013). Arthroconidia are vulnerable to RNS while mature spherules suppress nitric oxide species (NOS) and inducible NOS expression in macrophages (Figure 1) (Gonzalez et al., 2011). Mature spherules are too large for most host phagocytic activity, allowing Coccidiodes to evade early immune detection (Hung et al., 2002). Coccidioides induces host expression of arginase resulting in ornithine and urea production, important components for transition from arthroconidia to spherule (Hung et al., 2007). FGURE 1 www.frontiersin.org Figure 1 Fungal dimorphism presents challenges for immune detection and activation. Early infection: Coccidioides is vulnerable to immune detection during early infection due to the smaller size (2–5 μM) and SOWgp expression which is detected via Dectin-1 and TLR2 on innate immune cells. These interactions mediate clearance via phagocytosis and reactive oxide species production. Later infection: As Coccidioides sporulates, it secretes MEP1 which digests SOWgp from the fungal surface, hampering immune detection. Spherules induce arginase expression in host tissues, suppressing NOS/NO production via an unknown mechanism, contributing to immune suppression. 
In the spherule state, Coccidioides secretes metalloproteinase 1 (Mep1) which digests an immunodominant antigen spherical outer wall glycoprotein (SOWgp) on the fungal surface (Figure 1) (Hung et al., 2005). Phagocytotic granulocytes rely on pathogen associated molecular patterns such as SOWgp, thus Mep1 secretion prevents detection by innate immune cells (Hung et al., 2005). Coccidioides upregulates nitrate reductase during development, an enzyme that converts nitrate to nitrite, thereby enhancing Coccidioides survival in anoxic conditions, such as those found inside a granuloma (Johannesson et al., 2006). Early detection to inhaled fungus is critical for host response. Macrophages and neutrophils detect Coccidioides arthroconidia and immature spherules via receptors Dectin-1, Dectin-2, and Mincle interacting with SOWgp (Hung et al., 2002; Nguyen et al., 2013). Endothelial lung cells use these same receptors to regulate defensin secretion. Toll-like receptors (TLRs) and c-type lectin receptors (CLRs) interact with major pathogen-associated molecular patterns to detect Coccidioides (Romani, 2004; Viriyakosol et al., 2008; Viriyakosol et al., 2013). Like most fungi, Coccidioides expresses β-glucans, chitins, and mannans in the outer cell wall (Nguyen et al., 2013). These cell components are recognized by a variety of TLRs and CLRs and elicit strong inflammatory responses from local immune cells. Coccidioides interactions with TLR2 and Dectin-1 on macrophages activate production of reactive oxide species (ROS) and inflammatory cytokines, such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNFα) (Viriyakosol et al., 2008; Viriyakosol et al., 2013). There are no known nucleotide-binding oligomerization domain-like (NOD-like) receptors yet associated with Coccidioides detection. In humans, polymorphisms in IFNγ/IL-12 signaling pathway result in a STAT1 gain of function mutations that associate with increased disease severity in Coccidioides, Histoplasma, and Candida infection (Sampaio et al., 2013). In disseminated Coccidioides, patients with severe disease were found to have a STAT3 mutation (Odio et al., 2015). STAT 3 mediates IL-23 signaling, critical for IFNγ, IL-12, and IL-17 production while STAT1 signaling induces Th1 cell differentiation in response to IL-12 to produce IFNγ; IFNγ, in turn, inhibits Th17 differentiation (Yeh et al., 2014). IL-12β1 receptor deficiency is associated with increased risk of disseminated coccidioidomycosis (Yeh et al., 2014). In chronic mucocutaneous candidiasis, gain of function mutations in STAT1 and STAT3 correlates to more severe disease and poor TH17 responses (Zheng et al., 2015). These observations suggest that STAT1 and STAT3 immune signaling is critical in host control of Th1/Th17 cytokine balance and is required for protection and Coccidioides fungal control. In Blastomyces dermatitidis infection, LECs regulate collaborative killing between alveolar macrophages, dendritic cells (DC), and neutrophils (Hussell and Bell, 2014; Hernández-Santos et al., 2018). Upon LECs ablation, B. dermatitidis phagocytosis is reduced, and viable yeast numbers increase. Other data suggests that IL-1/IL-1R interactions regulate CCL20 expression in LECs. Chemokine CCL20 strongly recruits lymphocytes and weakly recruits neutrophils (Hernández-Santos et al., 2018). IL-1R-deficient mice express less CCL20 and lung Th17 cells are reduced, suggesting that IL-1/IL-1R signaling in LECs could regulate adaptive immune functions (Hernández-Santos et al., 2018). 
IL-1R is critical for vaccine induced resistance to Coccidioides infection via MyD88 induction of Th17 responses (Hung et al., 2014a; Hung et al., 2016a). Though it has not been explored, LECs could mediate early responses to Coccidioides through IL-1R, suggesting another innate immune cell role in anti-fungal responses within the lung tissues. Alveoli structure likely helps shape local immune responses. Three dominant cell types exist within and around the alveoli structure: Type 1 and Type 2 pneumocytes (also known as alveolar epithelial cells, AECs), and tissue-resident alveolar macrophages (Guillot et al., 2013; Hussell and Bell, 2014). Type 1 pneumocytes (AECI) secrete IL-10 constitutively, which bind to IL-10R on alveolar macrophages to maintain an anti-inflammatory state. Type 2 pneumocytes (or AECII) express CD200 which interacts with CD200R on alveolar macrophage to inhibit pro-inflammatory phenotype (Guillot et al., 2013; Hernández-Santos et al., 2018). Alveolar macrophages express TGFβ-receptors that bind to pneumocyte-expressed αvβ6 integrin, tethering them in the alveolar airspace. In inflammatory conditions, AECIs upregulate TLRs and AECIIs increase SP-A and SP-D production (Guillot et al., 2013). These surfactant proteins are known to enhance pathogen opsonization and phagocytosis, and are capable of binding to Coccidioides antigen (Awasthi et al., 2004). Coccidioides infected mice expressed less SP-A and SP-D protein in their bronchial lavage fluid compared to uninfected and vaccinated controls, demonstrating the pathogen’s capability of altering the lung mucosa (Awasthi et al., 2004). AECII secreted production of surfactant proteins may be influenced by Coccidioides allowing fungal escape of phagocytosis and prolonged survival. USER: My final research project is on fungal immunity. Read this section from a recent publication and explain the effects of the different interleukins mentioned in the section. Do not give an overview of the molecules, I only want to know their specific functions in fungal immunity. Limit to one sentence per interleukin. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
26
52
1,177
null
159
Using ONLY the context block/prompt to guide your answer, provide a comprehensive comparison of the subjects mentioned in the question. Do not use any previous knowledge or outside sources to inform your answer.
How e-sports broadcasts compare with traditional sports broadcasts?
Introduction Sportscasters on a Digital Field Sitting at a desk under bright lights, two announcers talk at a fast clip. After a weekend full of commentating, their voices are scratchy and fading, yet their excitement never wanes. No one watching can see the two men, though a camera sits just a few feet in front of them. Instead, the live audience and home viewers see the European champions, Fnatic, going head to head with SK Gaming on a virtual battlefield. They're 55 minutes into an absolute slugfest; the two announcers' voices rise and fall with the action of the game. Over the PA, the audience hears that this game is mere seconds away from ending. The SK team has Fnatic on the ropes after brilliantly defending their base. Fnatic's star player, Xpeke, stays, attempting to win the game singlehandedly. The casters initially dismiss the last-ditch effort while the bulk of SK's team move to end the game on the other side of the map. However, the camera stays on Xpeke, who is in a showdown with one member of SK. Nanoseconds away from defeat, Xpeke dodges a deadly ability. The casters erupt in nearly unintelligible, frantic excitement as the 25,000 live attendees at Spodek Arena in Katowice, Poland cheer at the sudden Fnatic victory. Back in the real world, the entire Fnatic team jumps away from their computers and piles onto Xpeke while we hear, "I do not believe it! Xpeke's done it!" Over 643,000 online viewers around the world watch the camera pan across the SK team, stunned in their defeat. From their home computers, these viewers have just witnessed e-sports history. The above scene unfolded at the 2014 Intel Extreme Masters World Championships in League of Legends, a popular e-sports title. The solo maneuver that Xpeke performed on that stage has since made its way into common League of Legends vernacular, being invoked in any match, casual or professional, where a player deftly ends a game singlehandedly. E-sports, which encompasses many more titles than League of Legends, has become a cultural phenomenon of sorts. People may wonder whether the whole scene is just a flash in the pan or something more significant. I begin this thesis in much the same way that I have begun many conversations over the past two years: defining e-sports. In most of those conversations, I simply say "professional video-gaming" and move on to other topics. Here, though, I fully elaborate on what e-sports means. More than just professional gaming, e-sports is an entire industry created around competitive gaming at all levels of play. An e-sport is not just a sports video game like the title might suggest, though some e-sports titles are sports video games. Instead, e-sports titles are meticulously balanced, competitive, multiplayer games. Many games would fall into this category, but it takes a community of people to take an e-sport to the level of the classics like Counter Strike and Starcraft. Such communities are core to the identity of e-sports. Indeed, this identity itself is an oxymoronic collision of geek and jock culture; a mixture that media would have us believe acts like oil and water. Even within e-sports communities lines are hazy and misdrawn. As Taylor and Witkowski (2010) show in their study of a mega-LAN event, the e-sports scene is fraught with identity issues not only from outside, but within as well. The jock-like first-person-shooter (FPS) players competing at the same event as the nerdy, enigmatic World of Warcraft players show the conflicting, lived masculinities in e-sports. 
Players are unsure whether to act like superstar athletes or tech-geeks. Can you be both? The word e-sports alone evokes such a conflicting image. Electronic sports seems almost paradoxical in nature. Have we moved beyond a physical match of skill and extended our contests to avatars in a digital world? How can two players sitting at a desk be sporting? As e-sports continue to grow not only as a segment of the gaming industry, but as a spectator affair, we begin to see the 'sports' side of e-sports both challenged and invoked more frequently. In a telling case, Twitter erupted after a Dota 2 tournament made an appearance on ESPN 2 in 2014. With $10 million at stake, many e-sports fans thought the event warranted the attention of the all-sports network. Plenty of viewers took to social media to praise the move made by ESPN. Others were shocked: "Espn2 is seriously airing an online gaming championship? Wtf man. This is our society now. That is not a sport" (Hernandez 2014). The sports status of e-sports has been both defended and attacked by journalists, academics, and fans alike. The debate about the status of e-sports has been raging for many years. Witkowski's piece, "Probing the Sportiness of E-Sports", presents both sides of the argument pulling from games studies scholars and assessing e-sports on their terms. Ultimately though, I believe she shelves the debate deftly when she states, "sport is a personal experience... as many a sporting scholar has written before - if an individual considers the sporting activity they are engaged in to be a sport... then it is a sport" (2009, 56). I do not wish to rehash this debate. I have no stake in it. As Witkowski asserts, the attempt would be futile. Instead, I accept the role traditional sports have played in the shaping of e-sports. In fact, exploring the relationship between e-sports and their traditional counterpart drives this work. In what follows, I argue that the sports media industrial complex has fundamentally shaped the current e-sports industry. Beyond this grounding, e-sports broadcasters constantly borrow from traditional televisual broadcasts, using models that they feel to be appropriate for their medium. Regardless of whether e-sports qualify as sports or not, they are constantly informed by sports broadcasting and follow a trajectory set out by traditional sports models. This work comes about at an interesting moment in e-sports history. E-sports audiences have never been larger: Riot Games boasted an impressive 27 million viewers for the League of Legends World Championship in 2014, while the 2015 Intel Extreme Masters world championship saw over 1 million concurrent viewers across multiple live-streaming platforms (Riot Games 2014; ESL 2014). An old classic, Counter Strike, has re-emerged, albeit in a new package. The audience it continues to draw proves that some titles have staying power in this fickle industry. At the same time, a new title, League of Legends, consistently pulls in over 100,000 concurrent viewers for its weekly shows in the U.S. and E.U. As the League of Legends Championship Series moves into its fifth season, it has come to resemble a traditional sports broadcast more than it does its fellow e-sports shows. A new addition in Season 5, a segment called Prime Time League (PTL), is nearly indistinguishable from ESPN's Pardon the Interruption (PTI) at a glance. 
Figure 1 - Left Image: Prime Time League; Right Image: Pardon the Interruption. Comparing these two images reveals the level of sports emulation found in e-sports broadcasting today. From the stats and schedule ticker at the bottom of the screen to the show rundown along the edge of the screen, an uninitiated viewer would have difficulty distinguishing between the e-sports show and the traditional sports show. A steady influx of television producers and directors is starting to shape an industry that already has an identity crisis while still investigating how best to harness the new medium of live-streaming. These assertions are not meant to give the impression that we stand on the edge of wholly untouched land as pioneers in a new frontier. As shown in the e-sports literature review to follow, the e-sports industry has a history of evoking the feeling of standing on a precipice. Organization In the introduction, I first provide a brief history of e-sports and take note of the directions e-sports scholarship has pursued. Following this review, I introduce the sports media industrial complex to better situate e-sports broadcasting within the larger media landscape of sports broadcasting: the focus of chapter 1. The first chapter begins by looking at the long history of sports and media. By introducing the full gamut of sports media, I am better able to investigate how e-sports broadcasting stays in conversation with each of its predecessors. As evidenced in the reshuffling of sports media through history, we can see that e-sports make use of all of these forms of media while creating something new. During this chapter, I look to the transition moments in traditional sports broadcasting as the foundation of the e-sports industry. Moments of tension and doubt within the sports media industry as it shifted from one medium to another provide perfect lessons to be learned by the e-sports industry as they struggle with some of the same issues found in the reshuffling of media history. Indeed, while making use of the same media through journalism, public relations, and audiovisual broadcasts, the e-sports industry constantly wrangles with the use of the newly emerged medium of live-streaming. Television especially influences live-streamed broadcasts, which e-sports broadcasts tend to approach with the same framework as television. Chapter two focuses on e-sportscasters, also known as shoutcasters. I begin the chapter with a brief look at the history of shoutcasting. Considering that many of the early shoutcasters pull solely from traditional sportscasters, understanding their influences is crucial in understanding how e-sports has evolved in the way it has. As, I argue, the single most pointed signaling of the sportiness in e-sports, these individuals have pushed the e-sports industry towards a sports model. When first-time viewers or listeners leave an e-sports broadcast with the distinct feeling of a sports broadcast in their mind, it is the shoutcasters doing their job. They rely heavily on conventions set by traditional sportscasters. Much like their predecessors when faced with something new, shoutcasters borrowed what they could and innovated when there was nothing to borrow. Chapter two also focuses on shoutcasters' formulation of their identity within the e-sports industry as personalities, professionals, and record-keepers. Shoutcasters are just now creating an identity separate from traditional sportscasting. 
Where veteran shoutcasters relied primarily on traditional sports broadcasts, newer casters look instead to other shoutcasters. These shoutcasters are reshaping their identity while attempting to fully embrace the new medium of live-streaming. The third and final chapter tackles the topic of economics in e-sports. As the history and trajectory of sports broadcasting have profoundly affected the e-sports industry, many of the economic models present in traditional sports bled into the e-sports industry as well. The e-sports industry in the US and Europe has yet to be analyzed as such. Some work (Taylor 2012) has focused on e-sports revenue streams including sponsorships, company models, and team ownership, but overall, the subject remains underexplored. Dal Yong Jin's (2010) analysis of the political economy of e-sports in South Korea offers a tool set for this chapter. While the South Korean e-sports model spawned out of an extremely particular set of circumstances that cannot be readily applied to the U.S. or E.U. e-sports scenes, Jin's investigation of the economic systems surrounding e-sports translates well to my own investigation of the U.S. and E.U. industries. As staggering prize pools continue to make headlines, it is easy to lose sight of the economic system working behind the scenes to keep e-sports financially salable, or in some cases not. The third chapter delves into traditional sports economics and their influence on the e-sports industry. In some areas, the models translate perfectly. In others, e-sports has been unable to tap into the same revenue generators as traditional sports. Unless some developments significantly alter the e-sports industry, it may be more tenable to pursue other models instead of the sports industry. Methods This thesis makes use of many qualitative methods including historical analysis, interviews, and fieldwork. To grasp the significance and situation of e-sports broadcasting in its current state fully, one must analyze the same developments in traditional sports broadcasting. As one takes a deeper look into the past of the professional sporting industry, its influences on e-sports become clear. A feedback loop has been created between the two. Historical analysis offers a glimpse at key moments which defined the incredibly successful global sports industry. Not only are similar situations appearing in e-sports, but e-sports pushes back into each of the investigated forms of media. A few of the issues currently facing e-sports could be resolved through following the path established by traditional sports, while other issues have been caused because so much has been borrowed. I also had the pleasure of conducting seven interviews with professional shoutcasters. I limited the selection of shoutcasters to full-time professionals, rather than amateurs, to get an insight into how these new professionals view their role within the industry. Roughly half the participants are veteran shoutcasters of five or more years. The other half have joined the scene more recently with one in particular having shoutcasted professionally for less than one year. As these informants are a few of only dozens of professional shoutcasters in the world, I have attempted to keep their identities anonymous. 
As professional personas, some of these casters may benefit from being associated with this work, but I do not want to run the risk of potentially linking these shoutcasters with their statements in the event that this information could somehow affect the community's perception of the individual or potentially harm their prospects within the e-sports industry. The conversations were all positive, but one can never truly assure their informants that information they have provided in confidence will have no repercussion in any foreseeable future. With these considerations in mind I decided before conducting the interviews that the informants would remain anonymous. Finally, I was also able to spend time working within the e-sports industry. My time spent working for a prominent e-sports company profoundly shaped this thesis. Working alongside industry professionals sparked countless conversations about the current climate of the e-sports industry and possible futures. These conversations have both helped and challenged my thinking about the e-sports industry. While I often refer to the e-sports industry or community as a homogenous whole, the professionals who live within the space are not all of one mind and it would be a mistake to present them that way. Within e-sports, there are many different games and communities vying for viewers, players, and attention. What follows is my best attempt at wrangling the many paths e-sports has started to follow. E-sports Literature Review E-sports is still a young industry and an even younger subject of critical inquiry. Most entries into e-sports scholarship have emerged within the last five years. E-sports literature tends to come from the much older tradition of games studies, but ties into many other fields including the social sciences, cultural studies, economics, and law. Professional-gaming literature is a veritable hotbed of potential research topics with more articles, theses, and dissertations appearing every year. Much of the growing body of e-sports literature focuses on the professionalization of gaming (Jin 2010; Mora and Heas 2005; Swalwell 2009; Taylor, Nicholas 2009; Taylor, T.L. 2012; Witkowski 2012). These histories offer much more than a rundown of the events that created the e-sports industry. They also offer insight into our contemporary social moment. The arrival of a professionalization of video gaming signals many significant developments within both western and non-western culture. The global nature of e-sports and its meshing together of complex and often conflicting identities continues to beg investigation. E-sports literature primarily resides within the social sciences. Many cultural analyses in e-sports (Chee and Smith 2005; Harper 2010 and 2014; Hinnant 2013; Swalwell 2009; Taylor 2011) have focused on the communities growing within different scenes. Todd Harper, for instance, investigates the culture of competitive fighting games, a fascinating community which stands both within and at odds with the rest of competitive gaming. Gender studies are also becoming increasingly common within e-sports literature (Chen 2006; Crawford 2005; Leonard 2008; Taylor 2009 and 2011; Taylor and Witkowski 2010; Witkowski 2013). With the fascinating and fraught formulation of masculinity within these spaces as well as the perceived absence of femininity, gender studies are incredibly important within e-sports literature. 
Nicholas Taylor (2011) offers insight into the ability of e-sports to create embodied performances of masculinity at live events which spread through communities specific to certain titles or genres. Taylor and Witkowski (2010) also show the conflicting versions of masculinity that appear in different e-sports genres. There has also been an increasing focus on e-sports as a spectator activity. Jeff Huang and Gifford Cheung (2012) found in a study that many of the e-sports fans they investigated prefer watching high-level play rather than playing a match themselves. Kaytou and Raissi (2012) also investigate spectatorship in e-sports with a focus on how best to measure live-streaming audiences. Others (Bowman 2013; Gommesen 2012; Kow and Young 2013) show that the audience in e-sports has a profound effect on performance for the players, akin to a traditional sports audience. These scholars also investigate the expertise apparent in e-sports players that is passed on through spectating as often as practicing. As the professional play of video games fascinates so many, e-sports literature has understandably focused primarily on professional players. Notable exceptions include Jin (2012) and Taylor (2012) who, while still heeding players, also investigate the surrounding factors which allow for play at a professional level. Without these other factors, professional players would not exist. It is from the tradition of these two authors, among others, that I base this work. This thesis, like many of the works listed above, seeks to better understand the phenomenon of e-sports while analyzing a particular segment of the scene. With few investigations into the broadcasting of e-sports, I hope to contribute to e-sports literature in a way that is both unique and replicable to other systems found within the larger e-sports framework. Sports Media Industrial Complex As sport and media become increasingly intertwined, it becomes difficult to analyze one without at least acknowledging the impact of the other. Pointing to the inextricable link between sports and media, sports media scholar K. Lefever (2012) argues, "while sport provides valuable content and audiences for media operators, the media is a revenue source and promotional tool for sport." As such, the steady professionalization and, in turn, commercialization of sport relies heavily on its media counterpart. The subsequent interdependence between media outlets, sponsors, and sports leagues creates what is often referred to as the sports/media complex or sports media industrial complex (Jhally 1989, Rowe 1999, Maguire 1991). Wenner (1989) coined the neologism, MediaSport, to define the deeply rooted relationship between sports and media. The two can hardly be considered separate anymore. Stein (2013), a Comparative Media Studies alumnus, building on the work of these earlier scholars, created a model which could be applied to new arrivals in the sports media landscape. Thankfully, Stein provides a fairly replicable analysis of sports video games within the broader sports media landscape. His investigation of the relationship between televisual sports video games and sports media largely informs my own work. He notes an almost relentless stream of advertising and commercialization rhetoric appearing in sports video games. Building on the work of Wenner, Rowe, and Jhally, he argues that the commodification and capitalist trends found in traditional sports broadcasting bleed into newer media such as video games. 
This steady influx of advertising and commercialization can be found in e-sports as well. As e-sports broadcasters gain more experience and access to more robust technology, they have started to incorporate many of the same commercial opportunities Stein noticed in sports video games. Segments of the broadcast are occasionally sponsored, or one might see a sponsor make an appearance in an event's title such as the Intel Extreme Masters tournament. Where Stein argues that sports video games incorporate these advertisements as a signifier of their televisual legitimacy, I argue that e-sports broadcasters make use of the same strategies because they are informed by earlier forms of sports media. The steady commercialization found in e-sports reveals the influence that the sports media industrial complex has had on the e-sports industry. In documenting the dynamics of the sports media industrial complex, Jhally (1989) argues that sports are best viewed as commodities. Jhally's model focuses on the sporting industry in the US prior to the emergence of new media. More readily applicable to e-sports, Lefever's (2012) analysis of the sports media complex within new media details a phenomenon which has upended the former relationships between stakeholders in the sports media industrial complex. She claims that, "the sports/media complex has somehow changed, allowing the different stakeholders to take up new roles" (Lefever 2012, 13). The stakeholders, including sports franchises, sponsors, and media outlets, have had to adapt to a new media landscape with new roles. These new roles are more transient within the high-demand world of new media. Sports organizations and franchises have taken a more active role in connecting with fans, media outlets have taken a larger interest in sports franchises (often buying sports franchises if it is less expensive than purchasing media rights), and sponsors have taken advantage of new, innovative ways to reach consumers (Lefever 2012, 21). According to sports scholars Haynes and Boyle (2003), television sports viewers are no longer expected to just sit back and relax. Instead they are expected to follow their sport through social media, forums, blogs, and other digital outlets. This new, active fan fits well within the e-sports industry and live-streaming, but has changed the traditional sports media industrial complex. Before delving too far into the role of traditional sports economic models on e-sports, however, I will first situate live-streaming and e-sports within the larger sports media industrial complex. Chapter 1 Sports Media in Transition From Print to Live-Streaming Every day, millions of Americans are catching up with the latest sports news through print, radio, television, and online. Sports have saturated the entire spectrum of mass media in the US. With the emergence of each form of mass media, sports coverage has been at the forefront of adoption and innovation (Bryant and Holt 2006, 22). Each major medium shift in the US has been accompanied by a massive reshuffling of the sports media landscape. Often, this reshuffling opens a space for a particular sport to take up the new medium, create conventions, and carve a path for others to follow. These sports were not spawned by mass media, but their spike in popularity around the emergence of a new medium indicates very specific social moments in the US. 
Early sports magazines and print coverage of sports focused primarily on prize-fighting, radio ushered in the golden era of baseball, and television transformed football into a titanic entertainment industry. The rise and stabilization of sports media are as much a product of available technology as they are indicative of societal preoccupations of the time. If sports and sports media are indicative of our social moment, then what can we glean from the arrival of live-streaming and e-sports? The co-evolution of sports and media is the coalescence of many factors including changes in power structures, modes of production, and available technology. As Bryant and Holt argue in their investigation of the history of sports and media, "[e]ach epoch of social evolution has witnessed important sports-media developments that were affected by the evolving socio-cultural environment" (2006, 22). In what follows, I trace the co-evolution of sports and media with particular focus on the relationship between emerging mass media and the media ecology surrounding that emergence. By documenting these moments of turbulence, I establish the framework necessary to analyze live-streaming as a new medium with which e-sports has emerged as an early adopter and convention creator. Live-streaming did not emerge independently from its predecessors, but rather delivers on the preoccupations of our current social moment. It has once again started a reshuffling of the roles of media within the sports media complex. E-sports, while primarily viewed through live-streaming, relies on all of the previous forms of media to varying degrees. With this framework in mind, I argue that the feedback between live-streaming, e-sports, and traditional sports has spawned an industry which roots itself in traditional sports media while still investigating the full potential of live-streaming. I begin by briefly discussing sports media in antiquity with Thomas Scanlon's (2006) piece on ancient Mediterranean sports and media. After this introduction to sports media, I move to the US in the late eighteenth century with the emergence of the first sports-only publication, the sports magazine, as well as early print news coverage of prize fighting during the rise of industrialization and nationalism. The next section maps the push towards immediacy in sports coverage and the rise of radio. On the heels of radio and the golden age of baseball, I discuss the early issues with televised sport before the post-war era. Moving into the 1950s and 1960s, I detail the transformation of football into a televisual sport accompanied by a very specific social contingency. I then transition into an investigation of live-streaming and e-sports, particularly how both are in conversation with sports media history. Origins of Sports Media As classicist Thomas Scanlon (2006) posits, there is no history of sports without its media counterpart. Media in antiquity, he argues, "are a tool of society, a means of transmitting a message, primarily one from the rulers to the ruled" (Scanlon 2006, 17). While his definition is quite limited, Scanlon is correct in noting that media are inflected with the power structures of a society. Sports as media were classically used by those with power to reinforce the hierarchy. Sports events were "represented as a benevolent benefaction from the rich, noble, and empowered to those marginalized" (Scanlon 2006, 18). 
This reinforcement of power structures comes through not only in the production of sporting events, but also in the medium itself. Scanlon suggests that the most powerful sports 'medium' in classical times was Roman architecture. The massive circuses and arenas were meant to "provoke awe, admiration, and obedience in the citizens" (Scanlon 2006, 18). Scanlon establishes that the predominant sports medium in a given society correlates directly with its notions of power. Within the realm of more dispersed authority, such as that of the Ancient Greeks, sports media reflected the high value of an individual and his merits. Depictions of athletics in Ancient Greek poetry and pottery, made by and for the common people, focus on a particular athlete's prowess more than the event itself. On the other hand, societies with incredibly rigid hierarchies and god-kings, such as the Ancient Egyptians and Persians, tend to represent sports as a demonstration of the ruler's power over their people. Ancient Rome, with its centrally focused authority, used architecture to demonstrate the power of the nobility as both benefactors and arbiters, diminishing the role of the athlete to that of an entertainer. Moving into more recent history with media such as newspapers and radio, Scanlon concludes that sports media became an amalgamation of both the Roman and Greek styles: large spectacles with massive personalities.

Establishing a Media Landscape: Early Sports Media in America

The importance of the printing press to modern society cannot be overstated. While its precise effects are still being debated [1], the affordances of the printing press allowed individuals to produce and disseminate a massive amount of information far more efficiently than ever before. With a massive rise in literacy rates and increased access to print brought about by the printing press, the reading population of the world shifted (Eisenstein 1983). While early readership was restricted to a very small subset of society, the printing press paved the way for the coverage of more mundane topics such as sports. In their analysis of sports media in pre-industrial America, sports media scholars Jennings Bryant and Andrea Holt point to two major developments: first, the appearance of sports in newspapers as 'general news' and second, the creation of a completely sports-centered publication, the sports magazine (2006, 22).

The advent and success of sports magazines in the early nineteenth century stands as a marker for some of the intellectual shifts of the industrial era. During this time we see a professionalization of sport in the form of prize fighters. We also see a shift from sports as a local leisure activity to something that one follows from a distance. Sports contests began to take on implications beyond a mere matching of athletes. Many sports magazines started out as independent, one-person operations that began circulation in the 1820s and 1830s (Bryant and Holt 2006, 22). The Spirit of the Times, one of the earliest iterations of the sports magazine, actually reached a circulation of over 100,000 readers by the 1840s. The success of this initial sports-focused publication displays the roots of the American sports media tradition. While they note the significance of sports magazines in the overall climate of sports media in America, Bryant and Holt trace the advent of modern sports media to recaps of prize fighting in the Penny Press age of the 1830s.

[1] See Elizabeth Eisenstein. 1983. The Printing Revolution in Early Modern Europe. New York: Cambridge University Press.
With increased circulation to the middle and lower classes, sports coverage increased substantially in the mid-nineteenth century. Sports coverage in the Penny Press era focused on creating spectacular depictions of sporting events. As McChesney, a media historian, points out, James Gordon Bennett, owner of the New York Herald, was "one of the first exponents of 'sensationalism' as a means of generating circulation, and sport fit comfortably within this rubric" (1989, 51). Out of the sensationalism present in these early newspapers, sports began to take on more significant cultural meaning. There was particular focus on regionalism and nationalism. Sports media scholar J. Enriquez explains that sporting events were far more likely to be covered if they featured a contest which reflected the social preoccupations of the day, such as a northern horse racing against a southern horse or an American boxer fighting a European (2002, 201). Through these mediated depictions, sporting events were encoded with much more meaning than a simple contest. They reflected the contemporary hopes and anxieties of the people. Sports media built up athletes as representatives. Newspaper recaps did much more than simply describe the actions; they created dramas (McChesney 1989, 51). The hyped-up imagery of athletes and their contests created through the Penny Press and sports magazines became the paradigm for sports coverage for decades while a new sport caught America's attention.

Newspaper Sports Writing and the Rise of Team Sports

The rise of baseball as a national pastime coincided with the period just after the American Civil War. McChesney explains, "The Civil War introduced baseball to an entire generation of Americans, as the troops on both sides played the game when time permitted. Indeed, baseball emerged as the preeminent national team sport during this period" (1989, 52). After the Civil War, baseball helped mediate conflict by providing common ground for northerners and southerners. This moment was one in which the country was seeking to heal its rift, looking for neutral things that could bind the nation together. Baseball filled a political agenda by giving people something to focus on without opening old wounds.

Sports writing changed drastically in the years following baseball's spike in popularity. Sports coverage began to receive regular columns and increased coverage throughout the late nineteenth century, leading to a new kind of journalistic specialization: the sports writer (Enriquez 2002, 202). This fixation on sport was a result of new socio-cultural environments. Mandelbaum (2004), a sports media scholar and historian, argues that the industrial revolution created a new sports landscape through several major developments. First, the notion of childhood had expanded. In the nineteenth century, the period between birth and entering the workforce increased substantially. The new notion of childhood permitted more people to engage with baseball, football, and basketball. This increased interest in team sports continued into adulthood. Watching and reading about sports in the newspaper or sports magazines became an acceptable way to recapture the "carefree years of their lives" (Mandelbaum 2004, 2).
Mandelbaum also argues that baseball offered a renewed connection to pastoral America, creating a feeling of nostalgia for the new city dwellers and factory workers who desperately missed the pace and beauty of rural America. Baseball coverage created the first major feedback loop between sports and media in America. Bryant and Holt claim that the importance of sport was downplayed significantly in the puritan era, but, "regular, routine reporting of sports in newspapers and specialized magazines helped shift the cultural attitude towards sports in general" (Bryant and Holt 2006, 25). They argue that in the late 1870s through the 1890s, Americans adopted a new stance on sports as important for the development of mind, body, and society. This new cultural stance on sports was shaped and fostered by increased media coverage of sports. As baseball and its media coverage became more professionalized, Americans began to consume sports media in completely new ways. Sports spectatorship became a regular and acceptable pastime for the industrial worker.

The industrial revolution created the first opportunity in America for sports production and spectatorship to be commercially successful endeavors. The growth of cities and the massive developments in individual mobility allowed for sporting events to take on new significance (Mandelbaum 2004, 3). Cities provided large numbers of sports players as well as spectators to fill newly built stadiums and watch newly formed teams. Sports fandom in the U.S. fit neatly into the predominant forms of labor and leisure. Zillmann and Paulus (1993), two psychologists who wrote on sports spectatorship, explain, "spectatorship, as a significant form of recreation, is an outgrowth of the monotony of machine-dictated labor, sports events became the weekend love affair of all those whose workday was strictly regulated by production schedules" (601). Zillmann and Paulus' article further supports the feedback between sports media consumption and societal structures. Live spectatorship in America had previously been seen as a luxury for the rich and powerful, but with the increased circulation of newspapers, and in particular sports coverage, to the middle and lower classes, sports spectatorship became accessible to an entirely new sector of the population (Bryant and Holt 2006, 21). Architecture once again emerged as an important medium. Large concrete and steel stadiums were created, replacing the more organically created playing fields of the late nineteenth century (Mandelbaum 2004, 52). We see here an important transition into the production of sport as a money-making opportunity. As I discuss in the third chapter, the introduction of investors and producers fundamentally alters sports and their media counterparts.

The available media shaped the portrayal and perception of athletics in the industrial era as well. The idea may sound a bit romantic, but Benjamin Rader (1984), a sports scholar focused on the transformation of sports media in America, labels the period of sports media prior to television as an era of heroes. Whether speaking of prize-fighters or the Mighty Casey of folklore, sports media in the industrial era painted athletes as larger-than-life characters. Rader claims, "[t]hose standing on the assembly lines and those sitting at their desks in the bureaucracies increasingly found their greatest satisfaction in the athletic hero, who presented an image of all-conquering power" (1989, 16).
To Rader, sports media before television presented the American ideal. Athletes were meritocratic role-models playing for the love of the game. Rader's analysis places the impetus on newspapers to depict dramatic stories with characters akin to David and Goliath. In addition to individual mobility, urbanization, and industrial work, Enriquez identifies the rise and legitimacy of sports journalism as the catalyst for the nationalization of sports in America (2002, 201). As all forms of communication and nationalization were transforming, sports coverage led the charge. In the early twentieth century, most newspapers had dedicated sports writers on staff. These sports writers became famous through their innovative and entrancing writing. Writers like W. O. McGeehan, who worked for many San Francisco papers, described athletes as sorrowful sages and their contests as the clashing of titans on a battlefield (Nyhistory.org 2015). In this period, however, it is difficult to judge the difference between journalism and public relations (Bryant and Holt 2006, 30). In fact, the issue of PR penetrating journalism in the late nineteenth to early twentieth century is explicitly laid out in Michael Schudson's (1981) chapter, "Stories and Information: Two Journalisms in the 1890s". At the turn of the century, there existed a dichotomy between news as entertainment and news as information. As papers around the country struggled to define themselves, sports media also went through a defining period. Legitimate sports writing became known for its higher literary quality, but read more like advertisements with its exaggerated, often hyperbolic, language. Public relations soon became as much a part of sports journalism as describing the events themselves. Team owners understood the media's role in keeping attendance at sporting events up and began catering to sports journalists for coverage (Enriquez 2002, 206). The team owners expected sports journalists to act as publicists for their events. The gambit paid off as sports writing filled more and more of the daily papers and attendance at live events continued to rise. The sports writers added significance to the experience of watching a sporting event. Between the shifts in the American middle class, leisure activities, and the flowery language of sports journalism, watching a sporting event began to take on the significance of watching history unfold. We will see these same issues appear again in e-sports coverage as journalism becomes a legitimizing force within the e-sports landscape, torn between deep analysis and hyped-up depictions for the sake of generating publicity.

Liveness continued to assert its role in sports media as new technologies emerged. The telegraph especially placed the impetus on news sources to provide timely information. In a fascinating illustration of the desire for timely sports news, the Chicago Tribune ran the following note on March 17, 1897, the day of the legendary boxing match between Jim Corbett and Bob Fitzsimmons: "The Tribune will display bulletins today on the prize fight. It has secured a telegraph wire to the ring in Carson City and a competent man will describe the progress of the fight, blow by blow, until the test is decided. The bulletins will be posted thirty seconds after they are written in the far Western city" (Bryant and Holt 2006, 29).
This fixation on live updates for sporting events across the nation is another example of how sports media has shaped the media landscape of America. Information began traveling faster than ever via wireless transmissions, but it was actually a yacht race which saw one of the very first implementations of wireless for live information transmission. Sporting events saw some of the earliest uses of the telegraph for news reporting as well (Mott 1950, 597). As the telegraph allowed for a sense of liveness even for remote events, it paved the way for the most significant development in sports media prior to television: radio.

A Fixation on Liveness: Radio and Sports Consumption

Radio delivered on the push towards liveness established by the telegraph. The first broadcast of a Major League Baseball game occurred within a year of the commercial release of radio (Enriquez 2002, 206). Rader remarks, "Now the fan did not have to await his morning newspaper; he instantly shared the drama transpiring on the playing field" (Rader 1984, 23). For the first time, sports were perceived as home entertainment. Broadcasters as well as businesses capitalized on the shift. Sports coverage was integral to the rise in popularity of radio in the interwar period. In Rader's words,

In the pre-television era, the heroes of sports assisted the public in coping with a rapidly changing society. The sports world made it possible for Americans to continue to believe in the traditional gospel of success: that hard work, frugality, and loyalty paid dividends; that the individual was potent and could play a large role in shaping his own destiny (1984, 15).

By Rader's account, sports programming on radio delivered a much-needed revitalization of American ideals through the transitional industrial period and the Great Depression. The rise of radio coincides with the golden age of baseball, but there was an awkward transitional phase into the new medium while newspapers and radio both tried to define their new boundaries. While consumers clearly desired liveness, initial radio broadcasts felt flat and emotionless (Bryant and Holt 2006, 27). Some of the greatest blow-by-blow sports writers were terrible at delivering a compelling radio broadcast. Sports writers were extremely adept at creating dramas through print, but they failed to capture audiences in the early days of radio. Oddly enough, their sports knowledge undermined their sports coverage in the new medium. Instead, a new role emerged: the sportscaster.

In the era of radio, the performance of live sports broadcasts came with significant stakes. Adept sportscasters were cherished more for their voices than their sports knowledge. Delivering play-by-play depictions of sporting events takes little technical knowledge; instead, the entertainment comes from the delivery. Mandelbaum writes of early radio sportscasters, "the broadcasters were akin to poets and troubadours who preserved and handed down the great tales of their cultures by committing them to memory and reciting them publicly" (2004, 80). Delivery was actually so important that sometimes sportscasters such as Graham McNamee, known especially for his baseball broadcasts, were not even present at the event but were instead handed written play-by-play depictions of the game so that they could add their own dramatic and authorial tone to the live event (Mandelbaum 2004). Another issue during the emergence of radio was redefining the role of newspaper sports coverage.
Radio could deliver the liveness desired by sports fans and was incredibly well suited for play-by-play commentary. Newspapers had traditionally covered the blow-by-blow report of an event, capturing the drama through flowery language and hyperbole. With radio, the sportscaster captured the audience's attention through the same means, bringing in even more emotion as his voice rose and fell with the action of the contest (Enriquez 2002, 202). Sports writers instead decided to focus on an area that radio broadcasters could not cover: strategy. Early sportscasters had to focus so much on the delivery of the action that they could not elaborate on the reasons behind certain maneuvers. Sports writers took advantage of this deficiency and began writing articles which focused on everything around the action. From in-depth analysis of strategy to the creation of larger-than-life athlete personalities, newspaper coverage of sports in the era of radio completely changed to remain relevant.

Sports magazines also had to find a new space to occupy during radio's reign. Completely unable to keep up with the live coverage by radio and the strategic coverage of America's favorite sport, baseball, sports magazines instead began to focus on niche sports such as yacht racing. The other innovation of sports magazines in the early 1930s was their addition of full-page color photographs of athletes, something that neither radio nor newspapers could offer (Enriquez 2002, 202). They remained an important sports medium but had been supplanted by both radio and newspapers. Baseball's hold on the American public was so strong that the niche sports, which were typically covered in sports magazines, hardly seemed relevant. Football in particular rarely saw coverage anywhere other than sports magazines (Bryant and Holt 2006, 32). Football had traditionally been seen as a college sport reserved for the wealthy, but with an increasing number of college graduates in the U.S. and the rise of a new medium, its niche status was about to change (Oriard 2014, vii).

The Televisual Transformation of Sport

Television's debut into the sports world was a colossal failure. Reaching only a few hundred people, the first American televisual sports broadcast was a Columbia-Princeton baseball game on May 17, 1939. Just a few years after the commercial release of the television in the U.S., RCA's first foray into televised sport flopped. The New York Times' Orrin E. Dunlap Jr. recounted on the following Sunday, "The televiewer lacks freedom; seeing baseball on television is too confining, for the novelty would not hold up for more than an hour if it were not for the commentator" (Rader 1984, 17). He goes on to say, "To see the fresh green of the field as The Mighty Casey advances to the bat, and the dust fly as he defiantly digs in, is a thrill to the eye that cannot be electrified and flashed through space on a May day, no matter how clear the air."

Bryant, Holt, Enriquez, and Rader attribute the failure of early televisual sports to several factors. First, television camera technology was rudimentary and receivers were even worse (Bryant and Holt 2006, 31; Rader 1984, 18). Viewers could hardly see the players, much less follow the ball or action on the field. Second, television was not a commercial success upon its release.
Sets were expensive and did not offer nearly enough programming to warrant their price: an issue that created a sort of negative loop as the television industry needed more viewers to warrant more content yet could not supply enough content to attract more viewers. The third factor, described by Enriquez, is the failure of broadcasters to adapt to the new medium. Sportscasters could not actually see the video feed and cast the game as if they were still on radio, recounting every single action that occurred on the field regardless of what was on viewers' screens at home. Inexperienced camera operators had difficulty following the action and the image rarely matched what the sportscaster was describing. Radio sportscasters also had difficulty transitioning into the new visual medium because they could no longer provide the same level of drama through exaggeration and hyperbole. Where short infield ground balls could previously be described as laser-fast bullets, the viewers at home now saw that the play was just another ordinary event. Situated somewhere between the experience of watching the game live at a stadium and a broadcast that still sounded like radio, televisual sport had a difficult time defining itself in the late 1930s and early 1940s. According to Rader, televisual sport experimentation stopped completely during the Second World War (1984, 23).

With the well-established roles of radio, newspapers, and sports magazines, the revival of televisual sport seemed to be impossible. The utter failure of televised sports in the late 1930s into the Second World War left televisual sport in a difficult position. Sports radio's popularity was at an all-time high in the 1940s. Baseball had captured the hearts and minds of the American people, and famous radio broadcasters such as Bill Stern and Jack Armstrong kept them listening with bated breath (Rader 1984, 30-31). Baseball, and live sports spectatorship more generally, however, could not keep the nation content for long. In what has been dubbed the Sports Slump of the 1950s by Rader and others (Bryant and Holt 2006; McChesney 1989), spectatorship had finally started to dwindle. Television sets were making their way into homes in record numbers after World War II. In the post-World War II era, pastimes shifted from inner-city, public forms of recreation to private, home-centered forms of recreation. Sports revenue was down and change was in the air. People could watch baseball on their television sets at home, but not many people wanted to. As shown by the earlier quote from The New York Times, television had difficulty containing the magic that baseball once held.

Football, however, was poised to rise with the new medium. It had been long overlooked, but football was incredibly well suited for television broadcasts. The large, visually distinct ball and typically slow-moving action provided an acceptable subject for contemporary television camera technology (Grano 2014, 13). College football had seen a bit of success in newspapers, but professional football had a negative reputation as a "perversion of the college game played for alma mater rather than a lousy paycheck" (Oriard 2014, vii). Radio broadcasts of football had never reached the same level of success as baseball. Professional football seemed to be a sport without a suitable medium.
As sports media scholar Michael Oriard explains, "[o]nly television could give the professional game a national audience, and Pete Rozelle's defining act as the commissioner who ushered in the modern NFL was to market the league through a single television contract, rather than leaving clubs to work out their own deals" (2014, vii). This deal with broadcasting giant NBC led to the NFL's great breakout story and what would soon become the model for televised sports (Rader 1984, 85). With NBC still losing money on a dwindling sports fanbase, the network was ready to pull the plug on its deal with the budding NFL until the 1958 championship match between the Baltimore Colts and the New York Giants (Grano 2014, 13). This match, still hailed as the 'Greatest Game Ever Played', would become the longstanding origin story of televised football. The game went into sudden-death overtime, pushing the broadcast into prime time on the East Coast, a slot in which NBC never dared to place professional football. As millions of Americans tuned in for their regularly scheduled programming, they instead found John Unitas and his Baltimore Colts scoring the game-winning touchdown after a long, hard-fought battle. Oriard, Rader, Grano, Oates, and Furness all trace the NFL's commercial success to this one defining moment.

As compelling as origin stories often are, the truth is that many other factors led to the success of football in the new mass medium. New technologies such as video tape were integral to the rise of football in America. Hitchcock argues that instant replay in particular helped with the rebranding of professional football: "The use of video-tape gave the game of football a whole new image... The instant replay changed football from brutal, quick collisions into graceful leaps, tumbles and falls. It gave football an aura of art in movement. It made football attractive to entirely new segments of the audience" (1989, 2). Where football players had once been seen as lethargic brutes, instant replay allowed broadcasters to slow down images, dissect plays, and highlight the athleticism of players (Rader 1984, 83-84). Sports, with football leading the charge, were once again on the cutting edge of media adoption. According to Dylan Mulvin, the first documented use of instant replay for review and training purposes was in 1957 during a game between the Los Angeles Rams and the San Francisco 49ers (2014, 49). By 1964, instant replay was a standard broadcasting technique across all sports. The NFL's willingness to adapt to the new medium set it apart from other sports at the time.

In addition to these technological and legal advances, Bryant and Holt as well as McChesney argue that one particularly innovative producer reinvented sports broadcasting for television: Roone Arledge. With ABC's full support, Arledge established television broadcasting conventions still present today. After the 1958 Championship game between the Colts and the Giants, ABC was scrambling to catch up to NBC's success in televised sports broadcasting. As Enriquez describes, "Television broadcasting affected different sports in different ways. It devastated boxing, had mixed effects on baseball, and proved a boon to college and professional football" (2002, 202). As NBC began to ride the wave created by the NFL, ABC looked to get in on the action. Arledge was given free rein to perform a complete overhaul of ABC Sports.
Bryant and Holt argue that the single most important innovation Arledge brought was the notion that a televisual broadcast should be presented "from the perspective of what the typical fan would see if he or she attended the game live" (Bryant and Holt 2006, 33). Arledge (2003) believed that the broadcast should capture the essence of attending a game, not just the play on the field, but the roar of the crowd, the cheerleaders, the marching bands, and the coaches on the sidelines. As Enriquez describes, "under Arledge, television assumed every role previously played by print media; it served as the primary medium for experiencing events, it provided detailed analysis, and it gave human faces to the participants" (2002, 205). Through football, televised sports were able to set conventions which separated them from earlier forms of media. This transition lives on in live-streaming today, as we will see later with live-streaming's adaptation rather than transformation of televised sport.

The arrival of television meant that sports radio and print media had to redefine their role in sports coverage. Television could deliver the liveness of radio and, with the help of commentators and technology like instant replay, the drama and dissection of strategy found in print media. Newspaper coverage of sports was now relegated to simple recaps. Sports magazines, on the other hand, rode the success of television. As Bryant and Holt assert, "Sports Illustrated offers a classic example of an old medium responding to a new one" (2006, 36). Rather than seeking out an area left uncovered by television, Sports Illustrated supported televised sports by providing innovative action photography and updates on the most popular athletes and teams at the time.

Sports broadcasts of the 1960s were infused with the hopes and fears of the Cold War era. R. Powers, a television sports scholar, suggests that sports filled a void in the American public, "shrugging off the darker morbidities of the Cold War and McCarthyism" (1984, 118). This renewed focus on sports as spectacle was captured by Whannel: "the youthful theme of ABC, echoed the Kennedy idealism of the new frontier, the sporting emphasis echoed Kennedy's image of muscular athleticism..." (2002, 34). Entertainment sports media, with its art-in-motion presentation, delivered a message of newness and regeneration to America. Through broadcasting and advertising deals, sports helped build and perpetuate the growing conspicuous consumption movement and the capitalist ideals of post-war America. Athletes resumed their star status. Sports stars began appearing in advertising everywhere. Merchandising became a key part of sports promotion. Anything from replica jerseys of sports stars to blankets and flags with team branding can be found almost anywhere in the U.S.

Contemporary sports fandom has come to mean much more than simply following a team. It means buying a team's products, playing sports video games, joining fantasy leagues, and watching sports entertainment television. Oates, a sports media scholar focused on the NFL, writes that fandom has been transformed by the presentation of athletes as commodities to be consumed selectively and self-consciously by sports fans (2014, 80). The previously subcultural hyper-fandom activities such as fantasy football and sports video games, Oates argues, have moved into mainstream prominence and profitability.
Fans are invited to interact with athletes as vicarious managers in fantasy sports, offering a completely new, personally tailored form of interaction with sports organizations. This new drive for constant connection and feedback within the sports industry culminates with live-streaming.

Live-Streaming: Constant Connection

As Oates suggests, sports fandom has fundamentally changed to reflect an increased involvement on the part of the spectator. Athletes and personalities have become commodities for fans to interact with. Social media, fantasy sports, and video games have created a connection to sports stars that was never before available in other media. At any moment, a spectator can catch highlights on ESPN, head over to forums to discuss major sporting events, or load a stream of a match on their phone, all while tweeting at their favorite athletes with the expectation that their words will be received on the other end.

Recent trends show a change in the sports media landscape as new platforms begin to vie for control over sports broadcasting in the US. The NFL has reportedly been exploring a deal with Google that would allow games to be streamed over the internet once its current contract with DirecTV ends in 2015. Such a deal would reflect the changing media landscape in the internet era. The rise of new streaming platforms poses an interesting dilemma to the current media titans and new opportunities for new forms of media sports. Thus far, using the tradition established by McChesney, Bryant, Holt, and Rader among others, I have used sports media as a lens through which to view particular socio-cultural moments in America. I now turn that lens towards the contemporary sports media landscape. What can we learn about our own social moment by looking at the use of streaming platforms for traditional sports or the arrival of e-sports as an entirely new form of professional competition that makes use of older forms of media, but thrives in live-streams and video on demand?

The MLB offers an early case study into the use of live-streaming for major league sports broadcasting. The regular season in the MLB consists of 2,430 games, a staggering number compared to the NFL's 256. The sheer number of regular season games held each year causes a problem with over-saturation. This inundation of content lowers the value of each individual game in the eyes of the major networks (Mondelo 2006, 283). Games that the networks chose not to air due to scheduling conflicts previously went unseen by fans outside the local media markets of the two competing teams. To remedy the situation, the MLB streamed over 1,000 regular season games online starting in 2003. The launch of MLB.tv in 2002 allowed engaged MLB fans to continue watching content even when they did not have access to the games through the major networks. While not initially a huge commercial success, MLB.tv still runs today, over a decade later, at a monthly subscription of $19.99, and as of 2014 it incorporated both post-season games and the World Series as part of the package (MLB.tv 2015). While the MLB has not released the official revenue totals for its live-streaming service, with 3.7 million subscribers the platform generates well over $400 million per year (MLB.tv 2013). This little-known use of live-streaming shows a hunger for immediate interaction with sports media regardless of the available medium.
Early live-streaming fundamentally looks and feels like television, but it filled a role which network television could not: constant, all-access connection to media. It took form on a new platform, but did not truly differ from television. Early live-streaming is more like an adaptation of television than a new medium. Rather than creating something new, the early foray into live-streaming by the MLB simply adapted the already present broadcasting infrastructure and applied it through a different avenue. Television is often invoked in live-streaming. If we look at MLB.tv, the .tv signifies its connection to television, but that domain is actually the country-code domain of Tuvalu. Other streaming platforms like ustream.tv, twitch.tv, and MLG.tv, all based outside of Tuvalu, use the same domain to signal their televisual connection.

Live-streaming emerged at a very particular moment in the evolution of sports media. With air-time limited on the major networks, the internet allows a near-infinite amount of content to reach sports fans. As Oates would argue, from fantasy sports to blogs to live-streaming, the internet is, for many, the new space of the sports fan. Live-streaming goes beyond the ability of other media to reach viewers wherever and whenever, whether from a home computer or a mobile device. Live-streaming delivers on the constant connectedness expected by consumers today.

At its roots, live-streaming is a televisual medium. So what separates it from television? Live-streaming today has created its own niche by blending other forms of media. Most live-streams host an internet relay chat (IRC) in addition to the audiovisual component of the broadcast. This IRC allows viewers to chat with other audience members and often the broadcaster, a functionality not currently available in television. This live audience connection in live-streaming is unparalleled in television. Hamilton et al., in their investigation of the significance of live-streaming for community creation, situate Twitch streams as an important 'third place' for community. Building on the work of both Oldenburg and McLuhan, Hamilton et al. (2014) suggest that "By combining hot and cool media, streams enable the sharing of rich ephemeral experiences in tandem with open participation through informal social interaction, the ingredients for a third place." The third place that the authors point to creates a rich connection akin to interpersonal interaction. The ephemeral nature of these interactions creates a deep sense of community even in streams with hundreds of thousands of viewers. Live-streaming, and in turn the IRC associated with streams, creates a shared experience tantamount to the "roar of a stadium" (Hamilton et al. 2014). These streams also pull in a global audience, connecting isolated audiences into one hyper-connected community. Live-streaming draws on television for its look and feel, but delivers not only on the desire for liveness perpetuated in sports media but also the hyper-connectivity present in today's globalized world.

E-sports, Live-streaming, and Sports Media

Many factors contributed to the success of live-streaming for e-sports. It arrived at a moment when television seemed closed to e-sports, and it was much less expensive to produce and much easier to cultivate. Television broadcasts are prohibitively expensive to produce. Early attempts at airing e-sports on television have typically flopped, rarely surviving past a second season.
E-sports are difficult to film when compared to traditional sports, and conventions had not yet been set for the televisual presentation of e-sports (Taylor 2012). The action in traditional sports can typically be captured by one shot. E-sports broadcasts, in contrast, must synthesize one cohesive narrative out of many different player viewpoints with varying levels of information. In a game like Counter-Strike, broadcasters must wrangle with a large map with ten players in first-person perspective. The resulting audiovisual feed is a frantic attempt to capture the most relevant information from the players with an outside 'observer' controlling another viewpoint removed from the players' point of view. The observer functionality in the early days of e-sports broadcasting created a difficult barrier to overcome for commercial success on television. Observer functionality had not yet become a focus for game developers and commentary had not reached the level of competency it has in more contemporary broadcasts. Instead of finding success on television, e-sports pulls in millions of concurrent viewers on live-streaming sites such as Twitch.tv.

With television seemingly out of reach and streaming requiring significant investment per event in the early 2000s, e-sports broadcasting remained relatively stagnant until the arrival of a reliable and cheap live-streaming platform. Justin.tv (and other similar sites like UStream and Stickam), which launched in 2007, delivered exactly what e-sports broadcasters needed to grow. The site allowed users to quickly and easily stream content online with the use of some relatively simple software. Both broadband internet reach and streaming technology had developed to a point that lowered the barrier of entry for broadcasters. Players from around the world streamed games from their bedrooms. E-sports broadcasters reached new, massive audiences. The success of gaming content on Justin.tv spurred a new streaming site dedicated solely to gaming. The games-centered streaming site, Twitch.tv, launched in 2011. Twitch.tv revolutionized the e-sports industry. Each of the casters I interviewed spent time detailing the importance of Twitch.tv without being prompted. As one explained, Twitch.tv is "the clearest driving factor that's grown e-sports over the past 2-3 years." As mentioned in the introduction, e-sports audiences have reached previously unheard-of levels. Large-scale e-sports events regularly see concurrent viewer numbers in the hundreds of thousands. These broadcasts still largely resemble televised sports, however, rarely, if ever, making use of the IRC.

Live-streaming is just one of the forms of media the e-sports industry makes use of. In fact, e-sports interacts with most media in the same ways that traditional sports have. The e-sports industry pushes back into almost all of the earlier forms of media discussed in this chapter. Print and radio typically fill a PR role in e-sports coverage. Large events or developments often make their way into publications like The New York Times. Local radio segments will occasionally feature summaries of e-sports events occurring nearby. Internet versions of both print and radio sports coverage are fundamental segments of the e-sports media ecosystem. Podcasts, digital audio files available on the internet through downloads or streaming, vlogs, and video diaries fill essentially the same role for e-sports that radio currently plays for traditional sports.
Experts weigh in on recent developments and players break down certain aspects of a game. E-sports journalism has also emerged as a legitimizing force within the industry. Sites like ongamers.com and esportsheaven.com keep fans abreast of any new developments in the professional scene for all of the major e-sports titles. Journalists like Richard Lewis add legitimacy to e-sports through their coverage of current events. Their recaps of developments as well as summaries of various tournaments and leagues closely resemble their print counterparts in sports coverage. It is clear that the e-sports industry is in conversation with many forms of media. Many of the forms and techniques are borrowed directly from sports coverage. These forms of media did not appear instantly, however; they are the result of years of push and pull with the larger sports media landscape. Nowhere is this more apparent than in the commentating of e-sports live-streams.

Chapter 2
Shoutcasters: Collecting Conventions

E-sportscasters, often referred to as shoutcasters, look and sound like professional sportscasters. Their attire and cadence create an instant connection to televisual sports. Having never seen a game of Starcraft 2 before, you may watch the flashing lights and explosions with a perplexed look on your face. As you continue to watch, you hear two commentators provide a narrative, stats fly across the screen, and you start to piece together the game in front of you. After a few minutes, you know the two players who are facing off against one another, you feel the excitement as they engage each other's armies, and a slight sting as the player you were rooting for concedes the match with a polite "GG." The whole presentation feels like a variant of Monday Night Football with virtual armies instead of football teams. From the stat-tickers to the sound of the commentator's voice, you can almost imagine the ESPN or CBS logo gracing the bottom corner of the screen.

Shoutcasters have become a staple in e-sports. As one of the main signifiers of the 'sports' moniker professional gaming has taken on, shoutcasters lend an air of professionalism to a scene which often struggles to define itself. The adoption of the 'sport' title has set a precedent for e-sports broadcasters which informs their style and conventions. Shoutcasters are important to investigate because they form a fundamental grounding for e-sports which helps it to create its identity in the face of blistering turnover rates and constant field shifts. E-sports stand in a unique position compared to traditional sports. Where players and coaches in traditional sports often have careers that last for several years, e-sports personalities suffer from intense turnover rates where professional careers can end within a year. E-sports players burn out quickly and coaches rarely make a lasting name in the industry. The recognizable personalities in e-sports are the few innovators and commentators who turned their passion into a career. In this chapter, I analyze the role of shoutcasters within the larger framework of the e-sports industry. I build much of this analysis on the foundation that Taylor (2012) established in her investigation of the rise of e-sports. Much of Taylor's analysis still holds true today, but some other developments in the field have created new dynamics within shoutcasting that were not present during her initial encounters with shoutcasters.
Understanding how shoutcasters borrow from earlier forms of media, the issues they perceive within the industry, and how they cultivate their own identity as shoutcasters while grappling with the hyper-connection found in live-streaming as a medium allows us to grasp the relationship e-sports broadcasting has with earlier forms of media while still creating its own identity. I begin with a very brief look at the history of shoutcasting.

Shoutcasting History

One can see that even early attempts at broadcasting competitive gaming borrowed heavily from its media contemporaries. Starcade, a 1982 show that ran for two years, marks one of the first forays into e-sports broadcasting. Though the term e-sports had not yet emerged, the show featured two opponents attempting to outscore each other on various arcade machines. If we look to Starcade as an early example of e-sports, then the origins of e-sports commentating resemble game show commentary found in Jeopardy! or The Price is Right. Watching Starcade for the hosting alone reveals many similarities to other game shows: the host wears typical game-show host garb, pleasantly explains every aspect of the competition, and speaks with the broadcast voice we all recognize. Starcade also shows the constant evolution of competitive gaming coverage as it continued to refine its camera angles, presentation, and format over its two-year run.

The model which more closely resembles our modern vision of shoutcasting gained momentum at the turn of the twenty-first century. The title shoutcaster comes from the early streaming software used for e-sports broadcasting, SHOUTcast. While many people familiar with e-sports may have no idea where the term comes from, a prominent shoutcaster, djWHEAT (2012), claims that the title remains due to its signaling of the history of e-sports. SHOUTcast, a media streaming program, arrived in 1998, allowing interested parties to broadcast audio recordings to various 'radio' channels for free. SHOUTcast allowed for video streaming, but as one early shoutcaster I interviewed lamented, the bandwidth and equipment required for video streaming was prohibitively expensive. Instead of the audiovisual broadcast we regularly associate with e-sports live-streams today, early shoutcasters relied on audio recordings akin to early radio coverage of traditional sports. These early broadcasts only streamed audio to a few hundred dedicated fans on internet radio. Early shoutcasts follow the form of traditional play-by-play radio broadcasts, focused primarily on presenting every development in the game. In interviews, veteran shoutcasters were not shy about admitting the influence radio sportscasters had on their own style. One mentioned that he spent hours listening to live sports radio to hone his own skills. Early shoutcasters also performed many aspects of the production that they are no longer required to perform in the more mature e-sports industry. They would attend events and set up their own station, typically with their own laptop and microphone. It was a very grassroots affair. With little experience in the technical aspects of broadcasting, the productions emulated as much as they could from sports broadcasting to lend an air of professionalism.
Instead of acting as producers, directors, editors, and on-air talent all at once as they had in the early audio-only streams, shoutcasters are now more able to focus on the portion of their work from which they get their name. Shoutcasting after the early days of internet radio has come to not only sound like traditional sportscasting, but also look like traditional sportscasting.

Something Borrowed: Influences from Sportscasting

Wardrobe

Many of the shoutcasters I interviewed talked about wardrobe as a huge change within shoutcasting, one that was spurred entirely by looking at traditional sportscasting. Most shoutcasters got their start wearing t-shirts and jeans at various e-sports events. Today, you will rarely find a shoutcaster not wearing a shirt with a blazer. Looking at the image below shows the incredible shift in shoutcasting just within the last six years. Both images feature the same shoutcaster: Joe Miller.

Figure 2 - Left: Joe Miller at the 2009 Intel Friday Game London; Right: Joe Miller at the 2015 Intel Extreme Masters World Championship in Katowice, Poland. Image credit: ESL, Philip Soedler and Helena Kristiansson. Flickr.com/eslphotos

The left-hand image comes from the 2009 Intel Friday Game London while the right-hand image comes from the 2015 Intel Extreme Masters World Championship. While the images are quite similar, the polish apparent in the right-hand image resembles that of a professional sportscaster. The gamer/geek vibe found in the left-hand image has been removed from the shoutcasting image. As a few of the shoutcasters I spoke with admitted, the drive to rework the shoutcaster wardrobe came purely from traditional sports. On top of that, they pointed to a desire to shed the gamer/geek stereotypes that e-sports had come to inhabit. By adopting professional attire, they felt that they could get rid of the old image and emulate the professionalism of a sports broadcast. Wardrobe is not the only aspect of traditional sportscasting that has made its way into shoutcasting.

Style

One of the more elusive aspects borrowed from traditional sports is the actual commentary style. I use the term elusive here to signal the difficulty in pinning down exactly why shoutcasters remind us so vividly of traditional sportscasters. Early shoutcasters had no models outside of traditional sportscasting so they took as much as they could: "So as a broadcaster we look at traditional sportscasting. We pull from that and then make sure it fits in game casting." As it turns out, many sports commentary conventions translate well into game casting. As such, the first generation of casters share many similarities with television sportscasters. Most of these early shoutcasters admit to being influenced almost entirely by traditional sportscasters. One caster explains, "Television is where we grew up, it's what we watched. So clearly that's where we're going to pull from."

Shoutcasters typically have no media training, instead relying on mimicry of earlier conventions to get by. As with most positions in e-sports, and similar to early sports writers and radio casters, shoutcasters are just passionate fans turned professional. In conversations, they each revealed a bit of their own personal history that pushed them towards broadcasting, but only one ever mentioned having received any sort of formal training. Years into his shoutcasting career, he "went back and did a journalism and broadcasting course for 6-9 months."
Of particular note, he mentions, "they did one really good project which was 'how to be a news presenter'. They taught me the basics of that." The rest, he says, he learned on-air through experience. The other shoutcasters I interviewed echoed this story. Most of the shoutcasters I interviewed fell into shoutcasting through happenstance and had to learn their craft on-air. Shoutcasters are akin to the very early television sportscasters who had to reinvent their style on-air, like Bob Stanton, a radio sportscaster turned television sportscaster who would send his friends to sports bars to gather feedback and suggestions from audience members (Rader 1984). Echoing this inexperience and improvisation, one shoutcaster I interviewed confided, "the first time I had ever been on camera, I sat down and I was like, 'I have no idea how to do this.' I had done two and a half years of audio casting, but I had never done video." Another caster recalls of his first show, "All I knew going into my first broadcast was that I know this game. I know how it works, I know these players, and I play against these kinds of players. I don't know how commentary works, but I can do this." After these first trial broadcasts, both of the above-mentioned shoutcasters admitted to going back and watching traditional sportscasters to learn more about their craft.

Other broadcasting style conventions such as how to handle dead-air, how to end a segment, or how to transition into gameplay were lifted directly from sportscasting. Paul "ReDeYe" Chaloner, a prominent personality within the e-sports industry, addresses each of these techniques in his primer on becoming a professional shoutcaster, constantly pointing to various examples from traditional sports broadcasting to illustrate his points. In his section on dead-air, Chaloner writes, "[o]ne of the best pieces of advice I had for TV was from legendary sports producer Mike Burks (11-time Emmy award winner for sports production) who told me 'A great commentator knows when to shut up and say nothing'" (2009, 9). Chaloner uses traditional sports broadcasting as a way to explain shoutcasting, a clear indication of its influence on e-sports broadcasting.

Content Analysis: Play-by-play and Color Commentary in the NFL and LCS

Another convention lifted directly from traditional sports broadcasts is the arrangement of the casting team. Traditional television sportscasters fall into one of two roles: play-by-play or color commentary. Shoutcasters use these same two roles. Both sports broadcasts and e-sports broadcasts feature one of each type. The play-by-play commentator narrates the action, putting together the complicated and unconnected segments of the game into a cohesive narrative. The color commentator provides their in-depth analysis of the game, typically from the stance of a professional player. Shoutcasters have adopted the two-person team directly from traditional sports broadcasts. The path to each role follows the same pattern as well. An ex-professional player almost always fills the role of color commentary in both traditional sports and e-sports. Their insight is unparalleled. Color commentators attempt to break down complex series of events or highly technical maneuvers as if they were still a professional player. In the words of one e-sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best to emulate them."
He goes on to say, "You can read up on it and study it as much as you like, but unless you've lived it, you can't really comment on it." In comparison, a play-by-play commentator does not need to have the technical depth, but relies more on presentation. Even though a play-by-play commentator has most likely played hundreds of hours of whichever game they cast, they cannot fill the role of the color commentator. This dynamic allows for play-by-play commentators to switch games with relative ease whereas color commentators, both in traditional sports and e-sports, are locked into one game.

To illustrate the emulation of sports broadcasting found in e-sports, I now turn to a brief content analysis of the commentary found in a regular season NFL game and a regular season League of Legends Championship Series game. I start with the commentary from one play in an NFL game. After presenting the traditional model, I move to the commentary from one team fight in League of Legends to demonstrate how the convention has been adapted for e-sports commentary. In both cases, I have removed the names of players, commentators, and teams to cut down on jargon and clutter. Each case exhibits the dynamic present in the two-man commentary team.

NFL

With both teams lined up, the play begins and the play-by-play commentator comes in immediately.

Play-by-play: Here's [player 1] out to midfield, a yard shy of a first down. [player 2] on the tackle.

After the play has ended, the color commentator takes over.

Color: It's been [team 1] on both sides of the ball. Whether it be defense and the way that they dominated this ball game and then offensively, the early going had the interception, didn't get much going over the next couple of possessions offensively but since that time, [player 3] has been very precise in how he has thrown the football and they just attacked this defense every which way.

LCS

Three members of the Red Team engage Blue Team at Red Team's turret.

Play-by-play: This is going to be dangerous. Doing what he can to hold out. They're going to grab the turret, the fight will continue after the shield onto [player 1] is already broken. He gets hit, the ignite is completely killing the ultimate! He gets hit by [player 2] who turns around again and heads back to [player 3].

With the action over for the moment, the color commentator begins to speak.

Color: I thought he finished a camp here too...

The color commentator is cut off as two more members of Blue Team attempt to attack.

Play-by-play: Heyo, as the top side comes in here too. [player 1], will he hit a good ultimate!? Oh! They were staring right at him but now he's just left to get shredded apart here. They couldn't have thought that this was going to go well for them.

With the fight concluded, the color commentator continues again.

Color: Is this just the week of chaos? Because that was a really really uncharacteristic lapse in judgement from [Blue Team]: Not calling everybody into position at the right time, and [Red Team] with the advantage make them pay for it. They didn't expect the ignite from Nautilus. I think they expected Nautilus to have exhaust instead, but [player 1] pops the ignite, and as we said there is no armor so [player 2] just... and it continues!

The color commentator is cut off once again as the two teams engage one another for a third time.

If we look at these examples for their content rather than the specific moment in the game we can catch a full illustration of the two-caster dynamic.
As we can see in the NFL example, the play-by-play commentator provides a running narration of the action in the game. When the action ends, the color commentator provides the meta-level analysis of the unfolding events. In the LCS example, the same dynamic is present; however, due to the continuous action in the game, the transition into color commentary becomes difficult. In the first lull, the LCS color commentator tries to insert his analysis, but he is cut off by a second engagement. The color commentator stops talking immediately and allows the play-by-play commentator to continue describing the action. After the engagement ends, we hear the color commentator pick up again, explaining why the fight developed the way it did as well as offering his insight into why the teams played the way they did.

Entertainment and Narrative

Entertainment value was a repeated concept in my interviews with shoutcasters. Some went so far as to claim that their role was only to entertain. One stated, "I want to get you excited. I want to get you to watch the game as if it was a show on television." Many would point to good sportscasters as an example to follow. If we recall the example of the early days of radio sportscasting, casters had a difficult time making the transition to the new medium. Their broadcasts felt flat when compared with their print counterparts (Bryant and Holt 2006, 27). Early sportscasters got locked into the idea that their responsibility was to provide the basic play-by-play depiction of a match. The golden age of sports radio was ushered in by sportscasters such as Graham McNamee, who were so popular that they would be asked to cast games remotely. McNamee, like a live version of his print counterparts, was famous for creating florid depictions of the game; athletes became heroes and their play became combat as told by McNamee. While the presentation of live and accurate information was still essential, popular radio sportscasters shifted sports media from news reports to entertainment. Sportscasters are responsible for this shift. Without their expert embellishment, play-by-play depictions lack entertainment value.

Even non-sports fans can feel the excitement from a particularly good sportscaster. The game they portray is far more intriguing than the actual events happening on the field (Bryant, Brown, Comisky, and Zillmann 1982). This disconnect forms one of the primary reasons that the transition to casting televised sport was so difficult. The small liberties that sportscasters took were no longer acceptable in the visual medium. Once the home viewer could see the game, commentary had to shift to accommodate more scrutiny. Radio sportscasters were notorious for their embellishment. As Bryant, Comisky, and Zillmann note from one of their several investigations of sportscasting, roughly forty percent of commentary is dramatic embellishment (1977). In that 1977 study, the authors tracked the amount of hyperbole and exaggeration in sports broadcasting and found that over half of the speech was dedicated to drama. E-sports shoutcasters, by comparison, rarely use dramatic embellishment of action. A few of the informants noted that they feel embellishing actions is not possible due to their audience. The e-sports audience, as pictured by shoutcasters, consists mostly of dedicated players. While many sports fans may play their sport casually, e-sports fans engage with the games they watch regularly.
As one shoutcaster explains, "we've only ever gone out to a hardcore audience." He acknowledges that the current audience is in flux, but the primary base of e-sports fans are intensely dedicated viewers and players. Because of this dynamic, shoutcasters feel that embellishment of the actions on screen would be difficult to slip past a discerning eye. Their belief that dramatic embellishment isn't possible may say more about their understanding of traditional sports fans than it does about their formulation of their role as commentators. While unacknowledged in interviews, the possibility for shoutcasters to add embellishment exists. Their choice not to use embellishment speaks more to their formulation of the e-sports audience than it does to their casting quality. Instead of embellishment of action, shoutcasters rely on another convention found in traditional sportscasting: narrative.

Studies that focus on the media effects of sportscasting suggest that sportscasters fundamentally alter the audience perception of the telecast through story-telling and narrative (Krein and Martin 2006). Sportscasters take many liberties in their descriptions of the game to add a dramatic flair. In several empirical studies, Bryant, Brown, Comisky, and Zillmann (1979) found that when sportscasters created a narrative of animosity between players, viewers felt an increased amount of tension and engagement. They conclude that the narrative scope of the sportscaster is critical in the perception of sports broadcasting. This narrative creation has bled into shoutcasting, as many shoutcasters attempt to amplify the emotional content of their games by highlighting underdog stories or hyping up animosity between players. One caster I interviewed connected his work to the narrative creation in sports commentary by stating, "Emotion is one of the key words in commentary. You need to be able to connect a certain emotion to the words you're saying. You need to be able to make someone scared for their favorite player or overjoyed when they win. Create greatest enemies. You need to be able to make these feelings through what you say or how you say it. Emotion is everything." This caster goes to great lengths to dig up statistics from previous matchups to provide a narrative for the match he casts. Through this investigation, the shoutcaster is able to contextualize a match with a rich history. Perhaps two players have met three times before and each time the result has been the same. Will viewers be able to share in the momentous victory of the underdog? As part of their preparation, shoutcasters will research all of the previous meetings between two players to create a history between them, a tactic which they acknowledge has been used in traditional sports for decades.

Production

Stream production is another realm where e-sports have started to borrow heavily. While e-sports producers may have gotten a head start on streaming live events, they often rely on the expertise of television producers to put a show together. Multiple shoutcasters pointed to a steady influx of television producers making their way into e-sports: "the way we approach a production is very much like television. A lot of the production guys that are getting into it are from television." In fact, the executive producer of the League of Legends Championship Series, an immensely popular e-sports program, is former Emmy winner Ariel Horn.
Horn won his Emmy as an associate producer of the 2004 Olympics for NBC. Likewise, Mike Burks, executive producer for the Championship Gaming Series mentioned in the above quote from Paul Chaloner, had an immense amount of experience in televised sports before migrating to e-sports. These are just two of the many experienced television producers making their way into e-sports. Their style is beginning to show as e-sports events become more polished every year. If we recall the image of Prime Time League from the introduction to this thesis, we can see the influx of television conventions in e-sports from the production side. The shoutcasters benefit from the experience of working with television producers to refine their style. As the field has grown, however, we begin to see minor tweaks in style and delivery. Spending a significant amount of time with e-sports casting, in comparison with sportscasting, reveals several distinctions. Much of this difference comes with the age of the field, but just as Starcade evolved over its short lifespan, shoutcasters have found ways to make themselves unique. Their understanding of their role within the overall e-sports industry informs us of some of the key differences here.

Something New: Shoutcaster Identity

Shoutcasters are situated somewhere between fan and professional. As evidenced by the above investigation of how shoutcasters are informed by their traditional predecessors, the role of shoutcasters is still very much in flux. Shoutcasters are only recently creating their own identity separate from their sportscasting roots. In particular, the less experienced shoutcasters I spoke with use markedly different models to inform their own casting.

The Second Generation of Professional Shoutcasters

A second generation of casters is just now coming onto the scene. Instead of looking to traditional sportscasters as their models, they emulate veteran shoutcasters: "my influences are the streamers that I watched. I watched everyone who casts and commentates...my commentary style comes from those guys. I don't know how much is conscious or just mimicry." This new caster has been on the scene for only a fraction of the time that the veterans have. In that time he has honed his shoutcasting skills not by finding sports commentary and seeing which aspects apply to shoutcasting, but by absorbing as much information as he could from other shoutcasters. Another fresh shoutcaster offers a fascinating disconnect from the older casters: "I definitely bounce off more e-sportscasters than sports. I just watch more e-sports than sports. Sports are so different than e-sports, there's so little that I can actually use from them." Where his predecessors admit to borrowing primarily from traditional sportscasters, this new generation has left the realm of traditional sportscasting behind.

The professional casters provide material for an amateur level of shoutcasters to pull from. The shoutcasters I interviewed were all professionals who typically work on major events with massive support and budgets. With a robust network of shoutcasters to pull from, however, we may see much more support for the grassroots level of e-sports that many early fans are accustomed to. Current shoutcasters also provide a model for potential careers. Through the hard-fought struggle of years' worth of unpaid events, the shoutcasters I spoke with have created a legitimate profession worth pursuing.
Most warned me that the path is no longer as easy as they once had it. Most of them pursued shoutcasting for the love of e-sports. They had years to fumble through persona creation, broadcast techniques, and conventions. New, potential shoutcasters are automatically held to a higher standard. A senior caster offered the following advice: "With how casting has changed, you need to be open to casting multiple games. You have to be willing to learn. There is a lot we can teach a caster, but you have to have some skills within you alone. You have to have some camera presence." The mention of camera presence signals a significant jump from early shoutcasting. Just a few years ago, the shoutcasters I interviewed sat down in front of a camera for the first time armed with nothing but game knowledge; camera presence was a foreign concept to them.

Perhaps the most significant change to casters is their overall level of experience. Some of the shoutcasters I spoke with have been broadcasting for over a decade. Time has allowed these casters to experiment and find their own style. As mentioned earlier, many of the minutiae involved in running a show take time to learn. Most casters got their start casually. They may have been passionate about e-sports and created a role for themselves within the industry. Some are former players who made the hard decision to give up on their hopes of winning big to instead cultivate a community.

As new professionals, shoutcasters are just now coming together with the support of e-sports companies under legitimate full-time contracts. The professional casters I spoke with all acknowledged a significant change in their commentary since making the transition into full-time casting with other casters around for feedback and training. One explained that he had never been sure how to handle dead-air, moments when both casters are silent and there is little action in the game. Through feedback sessions with other casters, he learned that there are some appropriate times to let the viewer formulate their own opinions on the match. Heeding the advice of veteran casters like Paul Chaloner, he went on to explain that one of the problems he sees in shoutcasting more generally is that shoutcasters are afraid to just be quiet during a stream. Part of the emotional build-up of a game, he explains, is letting the natural flow of a game take its course without any input from the casters.

It will be fascinating to watch as these expert networks inform e-sports broadcasts across the world. One informant remarked, "Now that we're all working together, we're learning a lot off of one another, which hasn't happened in commentary before." Beyond allowing veteran shoutcasters to compare notes, the professional status of shoutcasting provides training to new shoutcasters. One veteran claimed, "All the junior people are learning so much faster than we ever did. They're taking everything we learned over 5-10 years and doing it in months." These veteran casters can now pass on their experience and their style. Techniques like hand-offs at the end of a segment or transitions from the desk to gameplay often came up in my interviews as issues which take years to learn, but newer shoutcasters are able to pick these cues up from earlier shoutcasters instead of taking what they can from a sports show and hoping that everything translates well.
Beyond the expected roles that shoutcasters fill, they also perform many secondary tasks which don't typically fall to traditional sportscasters. In the very early days of live-streaming, shoutcasters were often responsible for every aspect of the broadcast from set-up to teardown. Some shoutcasters still regularly assist on production aspects of the broadcast such as graphics packages, camera set-up, and audio checks, but others leave the production aspects of the stream to more experienced hands while focusing instead on updating websites, answering tweets, creating content, or streaming their own play sessions. No two casters seem to fill exactly the same role within the broadcast team. They do, however, share some similarities which seem to form the shoutcaster identity.

Record-keepers and Community Managers

All of the casters pointed to stats-tracking as part of their roles outside of their air-time responsibilities. Most of them keep highly detailed databases full of every possible stat they can get a hold of from game clients and public databases. These stats can be as simple as wins and losses from remote regions or LAN tournaments that do not post their results online. The stats can also get as minute as the number of units a particular Starcraft 2 player built in one particular match. When the data isn't readily available, shoutcasters go out of their way to curate the database themselves. While some keep their database secret to provide a personal flair to their casting, others find it important to share this information with their e-sports communities. One shoutcaster recalled his surprise when he first worked with a major South Korean e-sports company with its own dedicated stats team. He expressed that he had never realized how much he needed a dedicated stats team, like those found in traditional sports, until that moment. It was then that he realized how much of his daily routine stats curation filled. While he was grateful for the help, he also felt personally responsible for stats collection and did not entirely trust the figures from the professional statisticians. This example shows the difficult position e-sports occupies: it constantly borrows from traditional sports while not yet being fully able to cope with the maturity of the sports media industry.

Another role which tends to fill a shoutcaster's daily routine is community maintenance. Whether the caster creates their own content on gaming sites, responds to fans on social media, or spends their time streaming and interacting with the community, they all mentioned some form of community maintenance as part of their duties as a shoutcaster. This particular focus on community maintenance most likely results from the grassroots origins of shoutcasters. These casters were a part of an e-sports community long before they became shoutcasters. Whether they view it as a professional responsibility or a social responsibility remains unclear. They all admit to some level of e-sports advocacy, however. They view PR and the proliferation of e-sports as part of their responsibilities. The most effective way to tackle this issue, many of them have decided, is through community engagement. The community aspect of shoutcasting identity leads me to a discussion of the affordances of the hyper-connectivity in live-streaming.
Grappling with the Hyper-Connectivity in Live-streaming and E-sports

Shoutcaster Connection

I have yet to meet anyone in the e-sports industry who has not remarked on the unique level of connection present in e-sports. Shoutcasters especially tap into the network created in these online communities. In a representative summary of my conversations, one shoutcaster explained, "the connectedness is so unique in e-sports. The way that we can interact with fans instantly. The players at the end of the day are gamers, they know exactly where to look. They've got Twitter, they go on Facebook, they post on Reddit." Audience members connect ephemerally in the IRC chat of a Twitch stream, but they constantly scour the social media outlets of their favorite stars, e-sports companies, and shoutcasters, creating a deeply connected community.

Professional shoutcasters understand that e-sports communities operate in a unique way when compared to traditional sports fandom. E-sports fans have an odd connection to franchises or teams within their chosen e-sport. As mentioned before, turnover rates and general industry growth force entire communities to radically reform from one season to another. Where traditional sports fans often follow a team based on geographic loyalty or familial connections, e-sports fans do not have that option. While you will often hear of fans cheering for teams in their geographic region (North America, Europe, South-East Asia, etc.) if they make it to the last few rounds of an international tournament, they may also base their fandom on a team logo or a particular player instead. Shoutcasters recognize this dynamic and use it to cultivate the community. Communication, they claim, separates them from traditional sports broadcasters or even news anchors: "We communicate more with our audience than you'll see TV news anchors or celebrities, but it's part of our job to get more information out there." The focus on communication seems to be unique to shoutcasters, as the majority of it happens outside of their broadcasts. While many shoutcasters define their on-screen role as that of an educator of sorts, the notion of spreading information about e-sports falls outside of their screen time. This double role of broadcaster and community manager extends what media scholars have dubbed the broadcasting persona beyond the point typically associated with sportscasters or news anchors.

Shoutcasters and Persona

Horton and Wohl (1956), two social scientists who studied mass media, make the assertion that mass media performers consciously create and maintain parasocial interactions through the creation of a persona. Social scientists have coined the term parasocial interaction for the intangible connection which most of us feel to some form of media or another. Standing in contrast to interpersonal interaction, a person-to-person exchange between two real and cognizant human beings, parasocial interaction is instead a unidirectional relationship (Miller and Steinberg 1970). The feeling of connection we create with fictional characters, news anchors, or sports stars does not fall within the definition of an interpersonal interaction. Whether mediated through a screen or the pages of a book, a parasocial interaction does not manifest in an exchange of thoughts or words between individuals. Rather, it is embodied and lived through one individual. Schiappa et al. (2007) conducted a meta-analysis of parasocial interaction literature to better understand how broadcasters 'hook' viewers on a certain show. They concluded that parasocial interactions can create and prolong connection to television programming. While Schiappa et al. concede that there are a few opportunities for a parasocial interaction to result in interpersonal relationships in the physical world, the compelling issue is the establishment of intimacy mediated through means well outside of a person-to-person context.

Horton and Wohl set out with the goal of creating a term for the relationship between performers and their audience in mass media. The authors suggest that the emergence of mass media created an illusion of connection to performers which was previously unavailable. They argue that the connection people feel to mass media stars is analogous to primary social engagement. If this type of engagement takes place in radio and television, where users have no opportunity to interact with audience members who are not co-present, it follows that the interaction between broadcasters, their audience, and one another in a Twitch stream is a particularly deep connection, even beyond the level noticed by Horton and Wohl. Shoutcasters create a familiar face and personality for audience members to connect with. Mark Levy (1979), another proponent of parasocial interaction who focused his work on news anchors, suggests that both news anchors and sportscasters help to create and maintain communities through regular scheduling, conversational tones, and the creation of a broadcasting persona. Shoutcasters perform this same role to even greater effect due to the constant changes surrounding the e-sports industry. The regularity and consistency of shoutcasters' broadcasts helps to foster a feeling of genuine connectedness within the community.

Although difficult to quantify, many conversations with shoutcasters turned to the odd feeling of connection that e-sports fans feel towards one another. One shoutcaster attempted to explain this connection by stating, "[w]henever I go to an event, I realize that fans are just friends I haven't met yet." I found this statement to be particularly poignant. It hints at the sort of intangible connection e-sports industry personalities and fans feel to one another through live-streams. Anecdotally, this air of friendship permeated the e-sports events that I have attended and went well beyond what I have felt at traditional sporting events or concerts. Previously, persona creation and maintenance occurred on-screen or at events only. Social media has forced many media personalities to extend their personas beyond the long-held notions of broadcaster-fan interaction. In many ways, shoutcasters must go beyond even these extended boundaries into near-constant persona maintenance because of their roles in live-streaming and community maintenance. Many shoutcasters give up their personal, off-air time to stream their own gameplay or to create video content, which necessarily prolongs the amount of time they embody their broadcast persona. I found that shoutcasters create a variation on the broadcast persona. Rather than a full-blown broadcasting personality which they inhabit while on-air, most shoutcasters have found that between community management, social media interactions, and broadcasts, they almost never get an opportunity to step out of their role as a shoutcaster.
Due to this near-constant connection, most shoutcasters acknowledge that they act differently on air, but they tend to simply invoke a more upbeat and charismatic version of themselves. In each of the interviews, casters pointed to the idea of excitement: "you have to get excited for the person out there watching." Even if they are not in the mood to shoutcast, or they have had a bad day, shoutcasters must leave their personal issues out of the broadcast. This aspect of the shoutcaster's personality comes out in all of their interactions on social media as well. Most of the shoutcasters I interviewed situated their role in e-sports as somewhere between Public Relations, Marketing, and Community Management. One of the casters explained the importance of invoking the broadcast persona when speaking about sponsor expectations: "We're working in an industry with companies behind us, we can't always say exactly what we want to say." Shoutcasters' acknowledgement of their involvement in securing sponsorships signals an interesting shift in the e-sports industry: the focus of the broadcast team on potential revenue generation. I turn now to an analysis of the revenue streams found in both traditional sports and e-sports broadcasting.

Chapter 3
Revenue

Funding Professional Play

After situating e-sports broadcasting within the greater sports media landscape, particularly in conventions, casting, and use of medium, it is important to analyze the portions of sports media production that have made their way into e-sports broadcasting. If we acknowledge the influence that traditional sports broadcasting has had on e-sports broadcasting in the realms of conventions and casting, we must also understand the importance of this relationship at the production and economic levels. In this chapter I discuss how the history and development of the sports media industrial complex in the U.S. has bled into the economics of the e-sports industry. In particular, I focus on how sports media models inform the e-sports industry while portions of the sports industry's revenue streams remain out of reach for e-sports broadcasters. Despite the reshuffling of the sports media industrial complex mentioned in the introduction to this thesis, traditional sports broadcasting still relies on the same revenue streams that it had in the past. Traditional sports producers have fully capitalized on the commodification of their content. E-sports producers, in contrast, are still shaping their revenue streams within live-streaming. The commercialization found in the sports media industrial complex has taken hold of the e-sports industry in several notable ways. Following the example set by Stein's thesis work, it is not enough to just acknowledge the relationship between e-sports and traditional sports media; we must also understand the path which brought e-sports broadcasting to its current state.
Introduction

Sportscasters on a Digital Field

Sitting at a desk under bright lights, two announcers talk at a fast clip. After a weekend full of commentating, their voices are scratchy and fading, yet their excitement never wanes. No one watching can see the two men, though a camera sits just a few feet in front of them. Instead, the live audience and home viewers see the European champions, Fnatic, going head to head with SK Gaming on a virtual battlefield. They're 55 minutes into an absolute slugfest, and the two announcers' voices rise and fall with the action of the game. Over the PA, the audience hears that this game is mere seconds away from ending. The SK team has Fnatic on the ropes after brilliantly defending their base. Fnatic's star player, Xpeke, stays, attempting to win the game singlehandedly. The casters initially dismiss the last-ditch effort while the bulk of SK's team moves to end the game on the other side of the map. However, the camera stays on Xpeke, who is in a showdown with one member of SK. Nanoseconds away from defeat, Xpeke dodges a deadly ability. The casters erupt in nearly unintelligible, frantic excitement as the 25,000 live attendees at Spodek Arena in Katowice, Poland cheer at the sudden Fnatic victory. Back in the real world, the entire Fnatic team jumps away from their computers and piles onto Xpeke while we hear, "I do not believe it! Xpeke's done it!" Over 643,000 online viewers around the world watch the camera pan across the SK team, stunned in their defeat. From their home computers, these viewers have just witnessed e-sports history.

The above scene unfolded at the 2014 Intel Extreme Masters World Championship in League of Legends, a popular e-sports title. The solo maneuver that Xpeke performed on that stage has since made its way into common League of Legends vernacular, being invoked in any match, casual or professional, where a player deftly ends a game singlehandedly. E-sports, which encompasses many more titles than League of Legends, has become a cultural phenomenon of sorts. People may wonder whether the whole scene is just a flash in the pan or something more significant.

I begin this thesis in much the same way that I have begun many conversations over the past two years: defining e-sports. In most of those conversations, I simply say "professional video-gaming" and move on to other topics. Here, though, I fully elaborate on what e-sports means. More than just professional gaming, e-sports is an entire industry created around competitive gaming at all levels of play. An e-sport is not just a sports video game like the title might suggest, though some e-sports titles are sports video games. Instead, e-sports titles are meticulously balanced, competitive, multiplayer games. Many games would fall into this category, but it takes a community of people to take an e-sport to the level of the classics like Counter Strike and Starcraft. Such communities are core to the identity of e-sports. Indeed, this identity itself is an oxymoronic collision of geek and jock culture, a mixture that media would have us believe acts like oil and water. Even within e-sports communities, lines are hazy and misdrawn. As Taylor and Witkowski (2010) show in their study of a mega-LAN event, the e-sports scene is fraught with identity issues not only from outside, but from within as well. The jock-like first-person-shooter (FPS) players competing at the same event as the nerdy, enigmatic World of Warcraft players shows the conflicting, lived masculinities in e-sports.
Players are unsure whether to act like superstar athletes or tech-geeks. Can you be both? The word e-sports alone evokes such a conflicting image. Electronic sports seems almost paradoxical in nature. Have we moved beyond a physical match of skill and extended our contests to avatars in a digital world? How can two players sitting at a desk be sporting? As e-sports continue to grow not only as a segment of the gaming industry, but as a spectator affair, we begin to see the 'sports' side of e-sports both challenged and invoked more frequently. In a telling case, Twitter erupted after a Dota 2 tournament made an appearance on ESPN 2 in 2014. With $10 million at stake, many e-sports fans thought the event warranted the attention of the all-sports network. Plenty of viewers took to social media to praise the move made by ESPN. Others were shocked: "Espn2 is seriously airing an online gaming championship? Wtf man. This is our society now. That is not a sport" (Hernandez 2014). The sports status of e-sports has been both defended and attacked by journalists, academics, and fans alike.

The debate about the status of e-sports has been raging for many years. Witkowski's piece, "Probing the Sportiness of E-Sports", presents both sides of the argument, pulling from games studies scholars and assessing e-sports on their terms. Ultimately, though, I believe she shelves the debate deftly when she states, "sport is a personal experience... as many a sporting scholar has written before - if an individual considers the sporting activity they are engaged in to be a sport... then it is a sport" (2009, 56). I do not wish to rehash this debate. I have no stake in it. As Witkowski asserts, the attempt would be futile. Instead, I accept the role traditional sports have played in the shaping of e-sports. In fact, exploring the relationship between e-sports and their traditional counterpart drives this work.

In what follows, I argue that the sports media industrial complex has fundamentally shaped the current e-sports industry. Beyond this grounding, e-sports broadcasters constantly borrow from traditional televisual broadcasts, using models that they feel to be appropriate for their medium. Regardless of whether e-sports qualify as sports or not, they are constantly informed by sports broadcasting and follow a trajectory set out by traditional sports models.

This work comes about at an interesting moment in e-sports history. E-sports audiences have never been larger: Riot Games boasted an impressive 27 million viewers for the League of Legends World Championship in 2014, while the 2015 Intel Extreme Masters World Championship saw over 1 million concurrent viewers across multiple live-streaming platforms (Riot Games 2014; ESL 2014). An old classic, Counter Strike, has re-emerged, albeit in a new package. The audience it continues to draw proves that some titles have staying power in this fickle industry. At the same time, a new title, League of Legends, consistently pulls in over 100,000 concurrent viewers for its weekly shows in the U.S. and E.U. As the League of Legends Championship Series moves into its fifth season, it has come to resemble a traditional sports broadcast more than it does its fellow e-sports shows. A new addition in Season 5, a segment called Prime Time League (PTL), is nearly indistinguishable from ESPN's Pardon the Interruption (PTI) at a glance.
Figure 1 - Left Image: Prime Time League; Right Image: Pardon the Interruption

Comparing these two images reveals the level of sports emulation found in e-sports broadcasting today. From the stats and schedule ticker at the bottom of the screen to the show rundown along the edge of the screen, an uninitiated viewer would have difficulty distinguishing between the e-sports show and the traditional sports show. A steady influx of television producers and directors is starting to shape an industry that already has an identity crisis while still investigating how best to harness the new medium of live-streaming. These assertions are not meant to give the impression that we stand on the edge of wholly untouched land as pioneers in a new frontier. As shown in the e-sports literature review to follow, the e-sports industry has a history of evoking the feeling of standing on a precipice.

Organization

In the introduction, I first provide a brief history of e-sports and take note of the directions e-sports scholarship has pursued. Following this review, I introduce the sports media industrial complex to better situate e-sports broadcasting within the larger media landscape of sports broadcasting: the focus of chapter 1.

The first chapter begins by looking at the long history of sports and media. By introducing the full gamut of sports media, I am better able to investigate how e-sports broadcasting stays in conversation with each of its predecessors. As evidenced in the reshuffling of sports media through history, we can see that e-sports make use of all of these forms of media while creating something new. In this chapter, I look to the transition moments in traditional sports broadcasting as the foundation of the e-sports industry. Moments of tension and doubt within the sports media industry as it shifted from one medium to another provide perfect lessons for the e-sports industry as it struggles with some of the same issues found in the reshuffling of media history. Indeed, while making use of the same media through journalism, public relations, and audiovisual broadcasts, the e-sports industry constantly wrangles with the use of the newly emerged medium of live-streaming. Television especially influences live-streamed broadcasts, which e-sports broadcasters tend to approach with the same framework as television.

Chapter two focuses on e-sportscasters, also known as shoutcasters. I begin the chapter with a brief look at the history of shoutcasting. Considering that many of the early shoutcasters pull solely from traditional sportscasters, understanding their influences is crucial in understanding how e-sports has evolved in the way it has. As, I argue, the single most pointed signaling of the sportiness in e-sports, these individuals have pushed the e-sports industry towards a sports model. When first-time viewers or listeners leave an e-sports broadcast with the distinct feeling of a sports broadcast in their mind, it is the shoutcasters doing their job. They rely heavily on conventions set by traditional sportscasters. Much like their predecessors when faced with something new, shoutcasters borrowed what they could and innovated when there was nothing to borrow. Chapter two also focuses on shoutcasters' formulation of their identity within the e-sports industry as personalities, professionals, and record-keepers. Shoutcasters are just now creating an identity separate from traditional sportscasting.
Where veteran shoutcasters relied primarily on traditional sports broadcasts, newer casters look instead to other shoutcasters. These shoutcasters are reshaping their identity while attempting to fully embrace the new medium of live-streaming.

The third and final chapter tackles the topic of economics in e-sports. As the history and trajectory of sports broadcasting has profoundly affected the e-sports industry, many of the economic models present in traditional sports bled into the e-sports industry as well. The e-sports industry in the US and Europe has yet to be analyzed as such. Some work (Taylor 2012) has focused on e-sports revenue streams including sponsorships, company models, and team ownership, but overall, the subject remains underexplored. Dal Yong Jin's (2010) analysis of the political economy of e-sports in South Korea offers a tool set for this chapter. While the South Korean e-sports model spawned out of an extremely particular set of circumstances that cannot be readily applied to the U.S. or E.U. e-sports scenes, Jin's investigation of the economic systems surrounding e-sports translates well to my own investigation of the U.S. and E.U. industries. As staggering prize pools continue to make headlines, it is easy to lose sight of the economic system working behind the scenes to keep e-sports financially salable, or in some cases not. The third chapter delves into traditional sports economics and their influence on the e-sports industry. In some areas, the models translate perfectly. In others, e-sports has been unable to tap into the same revenue generators as traditional sports. Unless some developments significantly alter the e-sports industry, it may be more tenable to pursue other models instead of the sports industry's.

Methods

This thesis makes use of many qualitative methods, including historical analysis, interviews, and fieldwork. To fully grasp the significance and situation of e-sports broadcasting in its current state, one must analyze the same developments in traditional sports broadcasting. As one takes a deeper look into the past of the professional sporting industry, its influences on e-sports become clear. A feedback loop has been created between the two. Historical analysis offers a glimpse at key moments which defined the incredibly successful global sports industry. Not only are similar situations appearing in e-sports, but e-sports pushes back into each of the investigated forms of media. A few of the issues currently facing e-sports could be resolved by following the path established by traditional sports, while other issues have been caused because so much has been borrowed.

I also had the pleasure of conducting seven interviews with professional shoutcasters. I limited the selection of shoutcasters to full-time professionals, rather than amateurs, to get an insight into how these new professionals view their role within the industry. Roughly half the participants are veteran shoutcasters of five or more years. The other half have joined the scene more recently, with one in particular having shoutcasted professionally for less than one year. As these informants are a few of only dozens of professional shoutcasters in the world, I have attempted to keep their identities anonymous.
As professional personas, some of these casters may benefit from being associated with this work, but I do not want to run the risk of linking these shoutcasters with their statements in the event that this information could somehow affect the community's perception of the individual or harm their prospects within the e-sports industry. The conversations were all positive, but one can never truly assure informants that information they have provided in confidence will have no repercussions in any foreseeable future. With these considerations in mind, I decided before conducting the interviews that the informants would remain anonymous.

Finally, I was also able to spend time working within the e-sports industry. My time spent working for a prominent e-sports company profoundly shaped this thesis. Working alongside industry professionals sparked countless conversations about the current climate of the e-sports industry and possible futures. These conversations have both helped and challenged my thinking about the e-sports industry. While I often refer to the e-sports industry or community as a homogenous whole, the professionals who live within the space are not all of one mind, and it would be a mistake to present them that way. Within e-sports, there are many different games and communities vying for viewers, players, and attention. What follows is my best attempt at wrangling the many paths e-sports has started to follow.

E-sports Literature Review

E-sports is still a young industry and an even younger subject of critical inquiry. Most entries into e-sports scholarship have emerged within the last five years. E-sports literature tends to come from the much older tradition of games studies, but ties into many other fields including the social sciences, cultural studies, economics, and law. Professional-gaming literature is a veritable hotbed of potential research topics, with more articles, theses, and dissertations appearing every year. Much of the growing body of e-sports literature focuses on the professionalization of gaming (Jin 2010; Mora and Heas 2005; Swalwell 2009; Taylor, Nicholas 2009; Taylor, T.L. 2012; Witkowski 2012). These histories offer much more than a rundown of the events that created the e-sports industry. They also offer insight into our contemporary social moment. The professionalization of video gaming signals many significant developments within both western and non-western culture. The global nature of e-sports and its meshing together of complex and often conflicting identities continues to beg investigation.

E-sports literature primarily resides within the social sciences. Many cultural analyses in e-sports (Chee and Smith 2005; Harper 2010 and 2014; Hinnant 2013; Swalwell 2009; Taylor 2011) have focused on the communities growing within different scenes. Todd Harper, for instance, investigates the culture of competitive fighting games, a fascinating community which stands both within and at odds with the rest of competitive gaming. Gender studies are also becoming increasingly common within e-sports literature (Chen 2006; Crawford 2005; Leonard 2008; Taylor 2009 and 2011; Taylor and Witkowski 2010; Witkowski 2013). With the fascinating and fraught formulation of masculinity within these spaces as well as the perceived absence of femininity, gender studies are incredibly important within e-sports literature.
Nicholas Taylor (2011) offers insight into the ability of e-sports to create embodied performances of masculinity at live events, performances which spread through communities specific to certain titles or genres. Taylor and Witkowski (2010) also show the conflicting versions of masculinity that appear in different e-sports genres. There has also been an increasing focus on e-sports as a spectator activity. Jeff Huang and Gifford Cheung (2012) found in a study that many of the e-sports fans they investigated prefer watching high-level play rather than playing a match themselves. Kaytou and Raissi (2012) also investigate spectatorship in e-sports with a focus on how best to measure live-streaming audiences. Others (Bowman 2013; Gommesen 2012; Kow and Young 2013) show that the audience in e-sports has a profound effect on the performance of the players, akin to a traditional sports audience. These scholars also investigate the expertise apparent in e-sports players that is passed on through spectating as often as practicing.

As the professional play of video games fascinates so many, e-sports literature has understandably focused primarily on professional players. Notable exceptions include Jin (2012) and Taylor (2012) who, while still heeding players, also investigate the surrounding factors which allow for play at a professional level. Without these other factors, professional players would not exist. It is in the tradition of these two authors, among others, that I base this work. This thesis, like many of the works listed above, seeks to better understand the phenomenon of e-sports while analyzing a particular segment of the scene. With few investigations into the broadcasting of e-sports, I hope to contribute to e-sports literature in a way that is both unique and replicable to other systems found within the larger e-sports framework.

Sports Media Industrial Complex

As sport and media become increasingly intertwined, it becomes difficult to analyze one without at least acknowledging the impact of the other. Pointing to the inextricable link between sports and media, sports media scholar K. Lefever (2012) argues, "while sport provides valuable content and audiences for media operators, the media is a revenue source and promotional tool for sport." As such, the steady professionalization and, in turn, commercialization of sport relies heavily on its media counterpart. The subsequent interdependence between media outlets, sponsors, and sports leagues creates what is often referred to as the sports/media complex or sports media industrial complex (Jhally 1989, Rowe 1999, Maguire 1991). Wenner (1989) coined the neologism MediaSport to define the deeply rooted relationship between sports and media. The two can hardly be considered separate anymore.

Stein (2013), a Comparative Media Studies alumnus, built on the work of these earlier scholars to create a model which could be applied to new arrivals in the sports media landscape. Thankfully, Stein provides a fairly replicable analysis of sports video games within the broader sports media landscape. His investigation of the relationship between televisual sports video games and sports media largely informs my own work. He notes an almost relentless stream of advertising and commercialization rhetoric appearing in sports video games. Building on the work of Wenner, Rowe, and Jhally, he argues that the commodification and capitalist trends found in traditional sports broadcasting bleed into newer media such as video games.
This steady influx of advertising and commercialization can be found in e-sports as well. As e-sports broadcasters gain more experience and access to more robust technology, they have started to incorporate many of the same commercial opportunities Stein noticed in sports video games. Segments of the broadcast are occasionally sponsored, or one might see a sponsor make an appearance in an event's title, such as the Intel Extreme Masters tournament. Where Stein argues that sports video games incorporate these advertisements as a signifier of their televisual legitimacy, I argue that e-sports broadcasters make use of the same strategies because they are informed by earlier forms of sports media. The steady commercialization found in e-sports reveals the influence that the sports media industrial complex has had on the e-sports industry.

In documenting the dynamics of the sports media industrial complex, Jhally (1989) argues that sports are best viewed as commodities. Jhally's model focuses on the sporting industry in the US prior to the emergence of new media. More readily applicable to e-sports, Lefever's (2012) analysis of the sports media complex within new media details a phenomenon which has upended the former relationships between stakeholders in the sports media industrial complex. She claims that "the sports/media complex has somehow changed, allowing the different stakeholders to take up new roles" (Lefever 2012, 13). The stakeholders, including sports franchises, sponsors, and media outlets, have had to adapt to a new media landscape with new roles. These new roles are more transient within the high-demand world of new media. Sports organizations and franchises have taken a more active role in connecting with fans, media outlets have taken a larger interest in sports franchises (often buying sports franchises if it is less expensive than purchasing media rights), and sponsors have taken advantage of new, innovative ways to reach consumers (Lefever 2012, 21). According to sports scholars Haynes and Boyle (2003), television sports viewers are no longer expected to just sit back and relax. Instead, they are expected to follow their sport through social media, forums, blogs, and other digital outlets. This new, active fan fits well within the e-sports industry and live-streaming, but has changed the traditional sports media industrial complex. Before delving too far into the role of traditional sports economic models in e-sports, however, I will first situate live-streaming and e-sports within the larger sports media industrial complex.

Chapter 1
Sports Media in Transition

From Print to Live-Streaming

Every day, millions of Americans are catching up with the latest sports news through print, radio, television, and online outlets. Sports have saturated the entire spectrum of mass media in the US. With the emergence of each form of mass media, sports coverage has been at the forefront of adoption and innovation (Bryant and Holt 2006, 22). Each major medium shift in the US has been accompanied by a massive reshuffling of the sports media landscape. Often, this reshuffling opens a space for a particular sport to take up the new medium, create conventions, and carve a path for others to follow. These sports were not spawned by mass media, but their spikes in popularity around the emergence of a new medium indicate very specific social moments in the US.
Early sports magazines and print coverage of sports focused primarily on prize-fighting, radio ushered in the golden era of baseball, and television transformed football into a titanic entertainment industry. The rise and stabilization of sports media are as much a product of available technology as they are indicative of the societal preoccupations of the time. If sports and sports media are indicative of our social moment, then what can we glean from the arrival of live-streaming and e-sports?

The co-evolution of sports and media is the coalescence of many factors, including changes in power structures, modes of production, and available technology. As Bryant and Holt argue in their investigation of the history of sports and media, "[e]ach epoch of social evolution has witnessed important sports-media developments that were affected by the evolving socio-cultural environment" (2006, 22). In what follows, I trace the co-evolution of sports and media with particular focus on the relationship between emerging mass media and the media ecology surrounding that emergence. By documenting these moments of turbulence, I establish the framework necessary to analyze live-streaming as a new medium with which e-sports has emerged as an early adopter and convention creator. Live-streaming did not emerge independently from its predecessors, but rather delivers on the preoccupations of our current social moment. It has once again started a reshuffling of the roles of media within the sports media complex. E-sports, while primarily viewed through live-streaming, relies on all of the previous forms of media to varying degrees. With this framework in mind, I argue that the feedback between live-streaming, e-sports, and traditional sports has spawned an industry which roots itself in traditional sports media while still investigating the full potential of live-streaming.

I begin by briefly discussing sports media in antiquity with Thomas Scanlon's (2006) piece on ancient Mediterranean sports and media. After this introduction to sports media, I move to the US in the early nineteenth century with the emergence of the first sports-only publication, the sports magazine, as well as early print news coverage of prize fighting during the rise of industrialization and nationalism. The next section maps the push towards immediacy in sports coverage and the rise of radio. On the heels of radio and the golden age of baseball, I discuss the early issues with televised sport before the post-war era. Moving into the 1950s and 1960s, I detail the transformation of football into a televisual sport accompanied by a very specific social contingency. I then transition into an investigation of live-streaming and e-sports, particularly how both are in conversation with sports media history.

Origins of Sports Media

As classicist Thomas Scanlon (2006) posits, there is no history of sports without its media counterpart. Media in antiquity, he argues, "are a tool of society, a means of transmitting a message, primarily one from the rulers to the ruled" (Scanlon 2006, 17). While his definition is quite limited, Scanlon is correct in noting that media are inflected with the power structures of a society. Sports as media were classically used by those with power to reinforce the hierarchy. Sports events were "represented as a benevolent benefaction from the rich, noble, and empowered to those marginalized" (Scanlon 2006, 18).
This reinforcement of power structures comes through not only in the production of sporting events, but also in the medium itself. Scanlon suggests that the most powerful sports 'medium' in classical times was Roman architecture. The massive circuses and arenas were meant to "provoke awe, admiration, and obedience in the citizens" (Scanlon 2006, 18). Scanlon establishes that the predominant sports medium in a given society correlates directly with their notions of power. Within the realm of more dispersed authority such as the Ancient Greeks, sports media reflected the high value of an individual and his merits. Depictions of athletics in Ancient Greek poetry and pottery, made by and for the common people, focus on a particular athlete's prowess more than the event itself. On the other hand, societies with incredibly rigid hierarchies and god-kings such as the Ancient Egyptians and Persians, tend to represent sports as a demonstration of the ruler's power over their people. Ancient Rome, with its centrally focused authority, used architecture to demonstrate the power of the nobility as both benefactors and arbiters, diminishing the role of the athlete to that of an entertainer. Moving into more recent history with media such as newspapers and radio, Scanlon concludes that sports media became an amalgamation of both the Roman and Greek styles: large spectacles with massive personalities. E-Sports Broadcasting 24 Establishing a Media Landscape: Early Sports Media in America The importance of the printing press on modem society cannot be overstated. While its precise effects are still being debated', the affordances of the printing press allowed individuals to produce and disseminate a massive amount of information far more efficiently than ever before. With a massive rise in literacy rates and increased access to print brought about by the printing press, the reading population of the world shifted (Eisenstein 1983). While early readership was restricted to a very small subset of society, the printing press paved the way for the coverage of more mundane topics such as sports. In their analysis of sports media in pre- industrial America, sports media scholars Jennings Bryant and Andrea Holt point to two major developments: first, the appearance of sports in newspapers as 'general news' and second the creation of a completely sports-centered publication: the sports magazine (2006, 22). The advent and success of sports magazines in the early nineteenth century stands as a marker for some of the intellectual shifts of the industrial era. During this time we see a professionalization of sport in the form of prize fighters. We also see a shift from sports as a local leisure activity to something that one follows from a distance. Sports contests began to take on implications beyond a mere matching of athletes. Many sports magazines started out as independent, one-person operations that began circulation in the 1820s and 1830s (Bryant and Holt 2006, 22). The Spiritof the Times, one of the earliest iterations of the sports magazine, actually reached a circulation of over 100,000 readers by the 1840s. The success of this initial sports-focused publication displays the roots of the American sports media tradition. While they note the significance of sports magazines in the overall climate of sports media in America, Bryant and Holt trace the advent of modem sports 1See Elizabeth Eisenstein. 1983. The Printing Revolution in Early Modern Europe. New York: Cambridge University Press. 
media to recaps of prize fighting in the Penny Press age of the 1830s. With increased circulation to the middle and lower classes, sports coverage increased substantially in the mid-nineteenth century. Sports coverage in the Penny Press era focused on creating spectacular depictions of sporting events. As McChesney, a media historian, points out, James Gordon Bennett, owner of the New York Herald, was "one of the first exponents of 'sensationalism' as a means of generating circulation, and sport fit comfortably within this rubric" (1989, 51). Out of the sensationalism present in these early newspapers, sports began to take on more significant cultural meaning. There was particular focus on regionalism and nationalism. Sports media scholar J. Enriquez explains that sporting events were far more likely to be covered if they featured a contest which reflected the social preoccupations of the day, such as a northern horse racing against a southern horse or an American boxer fighting a European (2002, 201). Through these mediated depictions, sporting events were encoded with much more meaning than a simple contest. They reflected the contemporary hopes and anxieties of the people. Sports media built up athletes as representatives. Newspaper recaps did much more than simply describe the actions; they created dramas (McChesney 1989, 51). The hyped-up imagery of athletes and their contests created through the Penny Press and sports magazines became the paradigm for sports coverage for decades while a new sport caught America's attention.

Newspaper Sports Writing and the Rise of Team Sports

The rise of baseball as a national pastime coincides with the period just after the American Civil War. McChesney explains, "The Civil War introduced baseball to an entire generation of Americans, as the troops on both sides played the game when time permitted. Indeed, baseball emerged as the preeminent national team sport during this period" (1989, 52). After the Civil War, baseball helped mediate conflict by providing common ground for northerners and southerners. This moment was one in which the country was seeking to heal its rift, looking for neutral things that could bind the nation together. Baseball filled a political agenda by giving people something to focus on without opening old wounds.

Sports writing changed drastically in the years following baseball's spike in popularity. Sports coverage began to receive regular columns and increased coverage throughout the late nineteenth century, leading to a new kind of journalistic specialization: the sports writer (Enriquez 2002, 202). This fixation on sport was a result of new socio-cultural environments. Mandelbaum (2004), a sports media scholar and historian, argues that the industrial revolution created a new sports landscape through several major developments. First, the notion of childhood had expanded. In the nineteenth century, the period between birth and entering the workforce increased substantially. The new notion of childhood permitted more people to engage with baseball, football, and basketball. This increased interest in team sports continued into adulthood. Watching and reading about sports in the newspaper or sports magazines became an acceptable way to recapture the "carefree years of their lives" (Mandelbaum 2004, 2).
Mandelbaum also argues that baseball offered a renewed connection to pastoral America, creating a feeling of nostalgia for the new city dwellers and factory workers who desperately missed the pace and beauty of rural America. Baseball coverage created the first major feedback loop between sports and media in America. Bryant and Holt claim that the importance of sport was downplayed significantly in the Puritan era, but "regular, routine reporting of sports in newspapers and specialized magazines helped shift the cultural attitude towards sports in general" (Bryant and Holt 2006, 25). They argue that in the late 1870s through the 1890s, Americans adopted a new stance on sports as important for the development of mind, body, and society. This new cultural stance on sports was shaped and fostered by increased media coverage of sports. As baseball and its media coverage became more professionalized, Americans began to consume sports media in completely different ways. Sports spectatorship became a regular and acceptable pastime for the industrial worker.

The industrial revolution created the first opportunity in America for sports production and spectatorship to be commercially successful endeavors. The growth of cities and the massive developments in individual mobility allowed for sporting events to take on new significance (Mandelbaum 2004, 3). Cities provided large numbers of sports players as well as spectators to fill newly built stadiums and watch newly formed teams. Sports fandom in the U.S. fit neatly into the predominant forms of labor and leisure. Zillmann and Paulus (1993), two psychologists who wrote on sports spectatorship, explain, "spectatorship, as a significant form of recreation, is an outgrowth of the monotony of machine-dictated labor, sports events became the weekend love affair of all those whose workday was strictly regulated by production schedules" (601). Zillmann and Paulus' article further supports the feedback between sports media consumption and societal structures. Live spectatorship in America had previously been seen as a luxury for the rich and powerful, but with the increased circulation of newspapers, and in particular sports coverage, to the middle and lower classes, sports spectatorship became accessible to an entirely new sector of the population (Bryant and Holt 2006, 21). Architecture once again emerged as an important medium. Large concrete and steel stadiums were created, replacing the more organically created playing fields of the late nineteenth century (Mandelbaum 2004, 52). We see here an important transition into the production of sport as a money-making opportunity. As I discuss in the third chapter, the introduction of investors and producers fundamentally alters sports and their media counterparts.

The available media shaped the portrayal and perception of athletics in the industrial era as well. The idea may sound a bit romantic, but Benjamin Rader (1984), a sports scholar focused on the transformation of sports media in America, labels the period of sports media prior to television as an era of heroes. Whether speaking of prize-fighters or the Mighty Casey of folklore, sports media in the industrial era painted athletes as larger-than-life characters. Rader claims, "[t]hose standing on the assembly lines and those sitting at their desks in the bureaucracies increasingly found their greatest satisfaction in the athletic hero, who presented an image of all-conquering power" (1989, 16).
To Rader, sports media before television presented the American ideal. Athletes were meritocratic role-models playing for the love of the game. Rader's analysis places the impetus on newspapers to depict dramatic stories with characters akin to David and Goliath. In addition to individual mobility, urbanization, and industrial work, Enriquez points to the rise and legitimacy of sports journalism as the catalyst for the nationalization of sports in America (2002, 201). As all forms of communication and nationalization were transforming, sports coverage led the charge. In the early twentieth century, most newspapers had dedicated sports writers on staff. These sports writers became famous through their innovative and entrancing writing. Writers like W. O. McGeehan, who worked for many San Francisco papers, described athletes as sorrowful sages and their contests as the clashing of titans on a battlefield (Nyhistory.org 2015). In this period, however, it is difficult to judge the difference between journalism and public relations (Bryant and Holt 2006, 30). In fact, the issue of PR penetrating journalism in the late nineteenth to early twentieth century is explicitly laid out in Michael Schudson's (1981) chapter, "Stories and Information: Two Journalisms in the 1890s". At the turn of the century, there existed a dichotomy between news as entertainment and news as information. As papers around the country struggled to define themselves, sports media also went through a defining period. Legitimate sports writing became known for its higher literary quality, but read more like advertisements with its exaggerated, often hyperbolic language. Public relations soon became as much a part of sports journalism as describing the events themselves. Team owners understood the media's role in keeping attendance at sporting events up and began catering to sports journalists for coverage (Enriquez 2002, 206). The team owners expected sports journalists to act as publicists for their events. The gambit paid off as sports writing filled more and more of the daily papers and attendance at live events continued to rise. The sports writers added significance to the experience of watching a sporting event. Between the shifts in the American middle class, leisure activities, and the flowery language of sports journalism, watching a sporting event began to take on the significance of watching history unfold. We will see these same issues appear again in e-sports coverage as journalism becomes a legitimizing force within the e-sports landscape, torn between deep analysis and hyped-up depictions for the sake of generating publicity.

Liveness continued to assert its role in sports media as new technologies emerged. The telegraph especially placed the impetus on news sources to provide timely information. In a fascinating illustration of the desire for timely sports news, the Chicago Tribune ran the following note on March 17, 1897, the day of the legendary boxing match between Jim Corbett and Bob Fitzsimmons: "The Tribune will display bulletins today on the prize fight. It has secured a telegraph wire to the ring in Carson City and a competent man will describe the progress of the fight, blow by blow, until the test is decided. The bulletins will be posted thirty seconds after they are written in the far Western city" (Bryant and Holt 2006, 29).
This fixation on live updates for sporting events across the nation is another example of how sports media has shaped the media landscape of America. Information began traveling faster than ever via wireless transmissions, but it was actually a yacht race which saw one of the very first implementations of wireless for live information transmission. Sporting events saw some of the earliest uses of the telegraph for news reporting as well (Mott 1950, 597). As the telegraph allowed for a sense of liveness even for remote events, it paved the way for the most significant development in sports media prior to television: radio.

A Fixation on Liveness: Radio and Sports Consumption

Radio delivered on the push towards liveness established by the telegraph. The first broadcast of a Major League Baseball game occurred within a year of the commercial release of radio (Enriquez 2002, 206). Rader remarks, "Now the fan did not have to await his morning newspaper; he instantly shared the drama transpiring on the playing field" (Rader 1984, 23). For the first time, sports were perceived as home entertainment. Broadcasters as well as businesses capitalized on the shift. Sports coverage was integral to the rise in popularity of radio in the interwar period. In Rader's words,

In the pre-television era, the heroes of sports assisted the public in coping with a rapidly changing society. The sports world made it possible for Americans to continue to believe in the traditional gospel of success: that hard work, frugality, and loyalty paid dividends; that the individual was potent and could play a large role in shaping his own destiny (1984, 15).

By Rader's account, sports programming on radio delivered a much-needed revitalization of American ideals through the transient industrial period and the Great Depression.

The rise of radio coincides with the golden age of baseball, but there was an awkward transitional phase into the new medium while newspapers and radio both tried to define their new boundaries. While consumers clearly desired liveness, initial radio broadcasts felt flat and emotionless (Bryant and Holt 2006, 27). Some of the greatest blow-by-blow sports writers were terrible at delivering a compelling radio broadcast. Sports writers were extremely adept at creating dramas through print, but they failed to capture audiences in the early days of radio. Oddly enough, their sports knowledge undermined their sports coverage in the new medium. Instead, a new role emerged: the sportscaster. In the era of radio, the performance of live sports broadcasts came with significant stakes. Adept sportscasters were cherished more for their voices than their sports knowledge. Delivering play-by-play depictions of sporting events takes little technical knowledge; instead, the entertainment comes from the delivery. Mandelbaum writes of early radio sportscasters, "the broadcasters were akin to poets and troubadours who preserved and handed down the great tales of their cultures by committing them to memory and reciting them publicly" (2004, 80). Delivery was actually so important that sometimes sportscasters such as Graham McNamee, known especially for his baseball broadcasts, were not even present at the event but were instead handed written play-by-play depictions of the game so that they could add their own dramatic and authorial tone to the live event (Mandelbaum 2004). Another issue during the emergence of radio was redefining the role of newspaper sports coverage.
Radio could deliver the liveness desired by sports fans and was incredibly well suited for play-by-play commentary. Newspapers had traditionally covered the blow-by-blow report of an event, capturing the drama through flowery language and hyperbole. With radio, the sportscaster captured the audience's attention through the same means, bringing in even more emotion as his voice rose and fell with the action of the contest (Enriquez 2002, 202). Sports writers instead decided to focus on an area that radio broadcasters could not: strategy. Early sportscasters had to focus so much on the delivery of the action that they could not elaborate on the reasons behind certain maneuvers. Sports writers took advantage of this deficiency and began writing articles which focused on everything around the action. From in-depth analysis of strategy to the creation of larger-than-life athlete personalities, newspaper coverage of sports in the era of radio completely changed to remain relevant.

Sports magazines also had to find a new space to occupy during radio's reign. Completely unable to keep up with the live coverage by radio and the strategic coverage of America's favorite sport, baseball, sports magazines instead began to focus on niche sports such as yacht racing. The other innovation of sports magazines in the early 1930s was their addition of full-page color photographs of athletes, something that neither radio nor newspapers could offer (Enriquez 2002, 202). They remained an important sports medium but had been supplanted by both radio and newspapers. Baseball's hold on the American public was so strong that the niche sports, which were typically covered in sports magazines, hardly seemed relevant. Football in particular rarely saw coverage anywhere other than sports magazines (Bryant and Holt 2006, 32). Football had traditionally been seen as a college sport reserved for the wealthy, but with an increasing number of college graduates in the U.S. and the rise of a new medium, its niche status was about to change (Oriard 2014, vii).

The Televisual Transformation of Sport

Television's debut into the sports world was a colossal failure. Reaching only a few hundred people, the first American televisual sports broadcast was a Columbia-Princeton baseball game on May 17, 1939. Just a few years after the commercial release of television in the U.S., RCA's first foray into televised sport flopped. The New York Times' Orrin E. Dunlap Jr. recounted on the following Sunday, "The televiewer lacks freedom; seeing baseball on television is too confining, for the novelty would not hold up for more than an hour if it were not for the commentator" (Rader 1984, 17). He goes on to say, "To see the fresh green of the field as The Mighty Casey advances to the bat, and the dust fly as he defiantly digs in, is a thrill to the eye that cannot be electrified and flashed through space on a May day, no matter how clear the air."

Bryant, Holt, Enriquez, and Rader attribute the failure of early televisual sports to several factors. First, television camera technology was rudimentary and receivers were even worse (Bryant and Holt 2006, 31; Rader 1984, 18). Viewers could hardly see the player, much less follow the ball or action on the field. Second, television was not a commercial success upon its release.
Sets were expensive and did not offer nearly enough programming to warrant their price: an issue that created a sort of negative loop, as the television industry needed more viewers to warrant more content yet could not supply enough content to attract more viewers. The third factor, described by Enriquez, is the failure of broadcasters to adapt to the new medium. Sportscasters could not actually see the video feed and called the game as if they were still on radio, recounting every single action that occurred on the field despite what was on viewers' screens at home. Inexperienced camera operators had difficulty following the action and the image rarely matched what the sportscaster was describing. Radio sportscasters also had difficulty transitioning into the new visual medium because they could no longer provide the same level of drama through exaggeration and hyperbole. Where short infield ground balls could previously be described as laser-fast bullets, the viewers at home now saw that the play was just another ordinary event. Caught somewhere between watching the game live at a stadium and a broadcast that still sounded like radio, televisual sport had a difficult time defining itself in the late 1930s and early 1940s. According to Rader, televisual sport experimentation stopped completely during the Second World War (1984, 23).

With the well-established roles of radio, newspapers, and sports magazines, the revival of televisual sport seemed to be impossible. The utter failure of televised sports in the late 1930s into the Second World War left televisual sport in a difficult position. Sports radio's popularity was at an all-time high in the 1940s. Baseball had captured the hearts and minds of the American people, and famous radio broadcasters such as Bill Stern and Jack Armstrong kept them listening with bated breath (Rader 1984, 30-31). Baseball, and more generally live-event sports spectatorship, however, could not keep the nation content for too long. In what has been dubbed the Sports Slump of the 1950s by Rader and others (Bryant and Holt 2006; McChesney 1989), spectatorship had finally started to dwindle. Television sets were making their way into homes in record numbers after World War II. In the post-World War II era, pastimes shifted from inner-city, public forms of recreation to private, home-centered forms of recreation. Sports revenue was down and change was in the air. People could watch baseball on their television sets at home, but not many people wanted to. As shown by the earlier quote from The New York Times, television had difficulty containing the magic that baseball once held.

Football, however, was poised to rise with the new medium. It had been long overlooked, but football was incredibly well suited for television broadcasts. The large, visually distinct ball and typically slow-moving action provided an acceptable subject for contemporary television camera technology (Grano 2014, 13). College football had seen a bit of success in newspapers, but professional football had a negative reputation as a "perversion of the college game played for alma mater rather than a lousy paycheck" (Oriard 2014, vii). Radio broadcasts of football had never reached the same level of success as baseball. Professional football seemed to be a sport without a suitable medium.
As sports media scholar Michael Oriard explains, "[o]nly television could give the professional game a national audience, and Pete Rozelle's defining act as the commissioner who ushered in the modern NFL was to market the league through a single television contract, rather than leaving clubs to work out their own deals" (2014, vii). This deal with the broadcasting giant NBC led to the NFL's great breakout story and what would soon become the model for televised sports (Rader 1984, 85). With NBC still losing money on a dwindling sports fanbase, the network was ready to pull the plug on its deal with the budding NFL until the 1958 championship match between the Baltimore Colts and the New York Giants (Grano 2014, 13). This match, still hailed as the 'Greatest Game Ever Played', would become the longstanding origin story of televised football. The game went into sudden-death overtime, pushing the broadcast into prime time on the East Coast, a slot in which NBC never dared to place professional football. As millions of Americans tuned in for their regularly scheduled programming, they instead found John Unitas and his Baltimore Colts scoring the game-winning touchdown after a long, hard-fought battle. Oriard, Rader, Grano, Oates, and Furness all trace the NFL's commercial success to this one defining moment.

As compelling as origin stories often are, the truth is that many other factors led to the success of football in the new mass medium. New technologies such as video tape were integral to the rise of football in America. Hitchcock argues that instant replay in particular helped with the rebranding of professional football: "The use of video-tape gave the game of football a whole new image... The instant replay changed football from brutal, quick collisions into graceful leaps, tumbles and falls. It gave football an aura of art in movement. It made football attractive to entirely new segments of the audience" (1989, 2). Where football players had once been seen as lethargic brutes, instant replay allowed broadcasters to slow down images, dissect plays, and highlight the athleticism of players (Rader 1984, 83-84). Sports, with football leading the charge, were once again on the cutting edge of media adoption. According to Dylan Mulvin, the first documented use of instant replay for review and training purposes was in 1957 during a game between the Los Angeles Rams and the San Francisco 49ers (2014, 49). By 1964, instant replay was a standard broadcasting technique across all sports. The NFL's willingness to adapt to the new medium set it apart from other sports at the time.

In addition to these technological and legal advances, Bryant and Holt as well as McChesney argue that one particularly innovative producer reinvented sports broadcasting for television: Roone Arledge. With ABC's full support, Arledge established television broadcasting conventions still present today. After the 1958 championship game between the Colts and the Giants, ABC was scrambling to catch up to NBC's success in televised sports broadcasting. As Enriquez describes, "Television broadcasting affected different sports in different ways. It devastated boxing, had mixed effects on baseball, and proved a boon to college and professional football" (2002, 202). As NBC began to ride the wave created by the NFL, ABC looked to get in on the action. Arledge was given free rein to perform a complete overhaul of ABC Sports.
Bryant and Holt argue that the single most important innovation Arledge brought was the notion that a televisual broadcast should be presented "from the perspective of what the typical fan would see if he or she attended the game live" (Bryant and Holt 2006, 33). Arledge (2003) believed that the broadcast should capture the essence of attending a game: not just the play on the field, but the roar of the crowd, the cheerleaders, the marching bands, and the coaches on the sidelines. As Enriquez describes, "under Arledge, television assumed every role previously played by print media; it served as the primary medium for experiencing events, it provided detailed analysis, and it gave human faces to the participants" (2002, 205). Through football, televised sports were able to set conventions which separated them from earlier forms of media. This transition lives on in live-streaming today, as we will see later with live-streaming's adaptation, rather than transformation, of televised sport.

The arrival of television meant that sports radio and print media had to redefine their role in sports coverage. Television could deliver the liveness of radio and, with the help of commentators and technology like instant replay, the drama and dissection of strategy found in print media. Newspaper coverage of sports was now relegated to simple recaps. Sports magazines, on the other hand, rode the success of television. As Bryant and Holt assert, "Sports Illustrated offers a classic example of an old medium responding to a new one" (2006, 36). Rather than seeking out an area left uncovered by television, Sports Illustrated supported televised sports by providing innovative action photography and updates on the most popular athletes and teams at the time.

Sports broadcasts of the 1960s were infused with the hopes and fears of the Cold War era. R. Powers, a television sports scholar, suggests that sports filled a void in the American public, "shrugging off the darker morbidities of the Cold War and McCarthyism" (1984, 118). The re-found focus on sports as spectacle established by "the youthful theme of ABC, echoed the Kennedy idealism of the new frontier, the sporting emphasis echoed Kennedy's image of muscular athleticism..." (Whannel 2002, 34). Entertainment sports media, with its art-in-motion presentation, delivered a message of newness and regeneration to America. Through broadcasting and advertising deals, sports helped build and perpetuate the growing conspicuous consumption movement and the capitalist ideals of post-war America. Athletes resumed their star status. Sports stars began appearing in advertising everywhere. Merchandising became a key part of sports promotion. Anything from replica jerseys of sports stars to blankets and flags with team branding can be found almost anywhere in the U.S.

Contemporary sports fandom has come to mean much more than simply following a team. It means buying a team's products, playing sports video games, joining fantasy leagues, and watching sports entertainment television. Oates, a sports media scholar focused on the NFL, writes that fandom has been transformed by the presentation of athletes as commodities to be consumed selectively and self-consciously by sports fans (2014, 80). Previously subcultural hyper-fandom activities such as fantasy football and sports video games, Oates argues, have moved into mainstream prominence and profitability.
Fans are invited to interact with athletes as vicarious managers in fantasy sports, offering a completely new, personally tailored form of interaction with sports organizations. This new drive for constant connection and feedback within the sports industry culminates with live-streaming.

Live-Streaming: Constant Connection

As Oates suggests, sports fandom has fundamentally changed to reflect an increased involvement on the part of the spectator. Athletes and personalities have become commodities for fans to interact with. Social media, fantasy sports, and video games have created a connection to sports stars that was never before available in other media. At any moment, a spectator can catch highlights on ESPN, head over to forums to discuss major sporting events, or load a stream of a match on their phone, all while tweeting at their favorite athletes with the expectation that their words will be received on the other end.

Recent trends show a change in the sports media landscape as new platforms begin to vie for control over sports broadcasting in the U.S. The NFL has recently signed a deal with Google allowing for the streaming of games over the internet after its current contract with DirecTV ends in 2015. This deal reflects the changing media landscape in the internet era. The rise of new streaming platforms poses an interesting dilemma for the current media titans and opens new opportunities for new forms of media sports. Thus far, using the tradition established by McChesney, Bryant, Holt, and Rader among others, I have used sports media as a lens through which to view particular socio-cultural moments in America. I now turn that lens towards the contemporary sports media landscape. What can we learn about our own social moment by looking at the use of streaming platforms for traditional sports or the arrival of e-sports as an entirely new form of professional competition that makes use of older forms of media but thrives in live-streams and video on demand?

The MLB offers an early case study in the use of live-streaming for major league sports broadcasting. The regular season in the MLB consists of 2,430 games, a staggering number compared to the NFL's 256. The sheer number of regular season games held each year causes a problem with over-saturation. This inundation of content lowers the value of each individual game in the eyes of the major networks (Mondelo 2006, 283). Games that these networks chose not to air due to scheduling conflicts previously went unseen by fans outside of the local media markets of the two competing teams. To remedy the situation, the MLB streamed over 1,000 regular season games online starting in 2003. The launch of MLB.tv in 2002 allowed engaged MLB fans to continue watching content even when they did not have access to the games through the major networks. While not initially a huge commercial success, MLB.tv still runs today, over a decade later, at a monthly subscription of $19.99, and as of 2014 the package incorporated both post-season games and the World Series (MLB.tv 2015). While the MLB has not released the official revenue totals for its live-streaming service, with 3.7 million subscribers the platform generates well over $400 million per year (MLB.tv 2013). This little-known use of live-streaming shows a hunger for immediate interaction with sports media regardless of the available medium.
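To see roughly how that revenue figure follows from the numbers cited above, the short sketch below works through the arithmetic. The six-month season length and the assumption that every subscriber pays the full monthly rate are my own simplifications for illustration, not figures reported by MLB.tv.

    # Back-of-envelope check on MLB.tv subscription revenue (a rough sketch;
    # the six-month season and uniform monthly billing are assumptions).
    monthly_price = 19.99        # USD per subscriber per month (cited above)
    subscribers = 3_700_000      # subscriber count (cited above)
    season_months = 6            # assumed billing months per season

    annual_revenue = monthly_price * subscribers * season_months
    print(f"Estimated annual revenue: ${annual_revenue:,.0f}")
    # Prints roughly $444 million, consistent with "well over $400 million per year."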
Early live-streaming fundamentally looks and feels like television, but it filled a role which network television could not: all-access and constant connection to media. It took form on a new platform, but did not truly differ from television. Early live-streaming is more like an adaptation of television than a new medium. Rather than creating something new, the early foray into live-streaming by the MLB simply adapted the already present broadcasting infrastructure and applied it through a different avenue. Television is often invoked in live-streaming. If we look at MLB.tv, the .tv signifies its connection to television, but that domain is actually the official domain for the country of Tuvalu. Other streaming platforms like ustream.tv, twitch.tv, and MLG.tv, all based outside of Tuvalu, use the same top-level domain to signal their televisual connection.

Live-streaming emerged at a very particular moment in the evolution of sports media. With air-time limited on the major networks, the internet allows a near-infinite amount of content to reach sports fans. As Oates would argue, from fantasy sports, to blogs, to live-streaming, the internet is, for many, the new space of the sports fan. Live-streaming goes beyond the ability of other media to reach viewers wherever and whenever, whether from a home computer or a mobile device. Live-streaming delivers on the constant connectedness expected by consumers today. At its roots, live-streaming is a televisual medium. So what separates it from television? Live-streaming today has created its own niche by blending other forms of media. Most live-streams host an internet relay chat (IRC) in addition to the audiovisual component of the broadcast. This IRC allows viewers to chat with other audience members and often the broadcaster, a functionality not currently available in television. This live audience connection in live-streaming is unparalleled in television. Hamilton et al., in their investigation of the significance of live-streaming for community creation, situate Twitch streams as an important 'third place' for community. Building on the work of both Oldenburg and McLuhan, Hamilton et al. (2014) suggest that "By combining hot and cool media, streams enable the sharing of rich ephemeral experiences in tandem with open participation through informal social interaction, the ingredients for a third place." The third place that the authors point to creates a rich connection akin to interpersonal interaction. The ephemeral nature of these interactions creates a deep sense of community even in streams with hundreds of thousands of viewers. Live-streaming, and in turn the IRC associated with streams, creates a shared experience akin to the "roar of a stadium" (Hamilton et al. 2014). These streams also pull in a global audience, connecting isolated audiences into one hyper-connected community. Live-streaming draws on television for its look and feel, but delivers not only on the desire for liveness perpetuated in sports media but also on the hyper-connectivity present in today's globalized world.

E-sports, Live-streaming, and Sports Media

Many factors contributed to the success of live-streaming for e-sports. It arrived at a moment when television seemed closed to e-sports, and it was much less expensive to produce and much easier to cultivate. Television broadcasts are prohibitively expensive to produce. Early attempts at airing e-sports on television have typically flopped, rarely surviving past a second season.
E-sports are difficult to film when compared to traditional sports, and conventions had not yet been set for the televisual presentation of e-sports (Taylor 2012). The action in traditional sports can typically be captured by one shot. E-sports broadcasts, in contrast, must synthesize one cohesive narrative out of many different player viewpoints with varying levels of information. In a game like Counter-Strike, broadcasters must wrangle with a large map with ten players in first-person perspective. The resulting audiovisual feed is a frantic attempt to capture the most relevant information from the players, with an outside 'observer' controlling another viewpoint removed from the players' point of view. The observer functionality in the early days of e-sports broadcasting created a difficult barrier to overcome for commercial success on television. Observer functionality had not yet become a focus for game developers, and commentary had not reached the level of competency it has in more contemporary broadcasts. Instead of finding success on television, e-sports pulls in millions of concurrent viewers on live-streaming sites such as Twitch.tv.

With television seemingly out of reach and streaming requiring significant investment per event in the early 2000s, e-sports broadcasting remained relatively stagnant until the arrival of a reliable and cheap live-streaming platform. Justin.tv (and other similar sites like UStream and Stickam), which launched in 2007, delivered exactly what e-sports broadcasters needed to grow. The site allowed users to quickly and easily stream content online with the use of some relatively simple software. Both broadband internet reach and streaming technology had developed to a point that lowered the barrier to entry for broadcasters. Players from around the world streamed games from their bedrooms. E-sports broadcasters reached new, massive audiences. The success of gaming content on Justin.tv spurred a new streaming site dedicated solely to gaming. The games-centered streaming site, Twitch.tv, launched in 2011. Twitch.tv revolutionized the e-sports industry. Each of the casters I interviewed spent time detailing the importance of Twitch.tv without being prompted. As one explained, Twitch.tv is "the clearest driving factor that's grown e-sports over the past 2-3 years." As mentioned in the introduction, e-sports audiences have reached previously unheard-of levels. Large-scale e-sports events regularly see concurrent viewer numbers in the hundreds of thousands. These broadcasts, however, still largely resemble televised sports, rarely, if ever, making use of the IRC.

Live-streaming is just one of the forms of media the e-sports industry makes use of. In fact, e-sports interacts with most media in the same ways that traditional sports have. The e-sports industry pushes back into almost all of the earlier forms of media discussed in this chapter. Print and radio typically fill a PR role in e-sports coverage. Large events or developments often make their way into publications like The New York Times. Local radio segments will occasionally feature summaries of e-sports events occurring nearby. Internet versions of both print and radio sports coverage are fundamental segments of the e-sports media ecosystem. Podcasts (digital audio files available on the internet through downloads or streaming), vlogs, and video diaries fill essentially the same role for e-sports that radio currently plays for traditional sports.
Experts weigh in on recent developments and players break down certain aspects of a game. E-sports journalism has also emerged as a legitimizing force within the industry. Sites like ongamers.com and esportsheaven.com keep fans abreast of any new developments in the professional scene for all of the major e-sports titles. Journalists like Richard Lewis add legitimacy to e-sports through their coverage of current events. Their recaps of developments as well as summaries of various tournaments and leagues closely resemble their print counterparts in sports coverage. It is clear that the e-sports industry is in conversation with many forms of media. Many of the forms and techniques are borrowed directly from sports coverage. These forms of media did not appear instantly, however; they are the result of years of push and pull with the larger sports media landscape. Nowhere is this more apparent than in the commentating of e-sports live-streams.

Chapter 2: Shoutcasters

Collecting Conventions

E-sportscasters, often referred to as shoutcasters, both look and sound like professional sportscasters. Their attire and cadence both create an instant connection to televisual sports. Having never seen a game of Starcraft 2 before, you may watch the flashing lights and explosions with a perplexed look on your face. As you continue to watch, you hear two commentators provide a narrative, stats fly across the screen, and you start to piece together the game in front of you. After a few minutes, you know the two players who are facing off against one another, you feel the excitement as they engage each other's armies, and a slight sting as the player you were rooting for concedes the match with a polite "GG." The whole presentation feels like a variant of Monday Night Football with virtual armies instead of football teams. From the stat-tickers to the sound of the commentator's voice, you can almost imagine the ESPN or CBS logo gracing the bottom corner of the screen.

Shoutcasters have become a staple in e-sports. As one of the main signifiers of the 'sports' moniker professional gaming has taken on, shoutcasters lend an air of professionalism to a scene which often struggles to define itself. With the adoption of the 'sport' title, a precedent has been set for e-sports broadcasters which informs their style and conventions. Shoutcasters are important to investigate because they form a fundamental grounding for e-sports which helps it to create its identity in the face of blistering turnover rates and constant field shifts. E-sports stand in a unique position compared to traditional sports. Where players and coaches in traditional sports often have careers that last for several years, e-sports personalities suffer from intense turnover rates where professional careers can end within a year. E-sports players burn out quickly and coaches rarely make a lasting name in the industry. The recognizable personalities in e-sports are the few innovators and commentators who turned their passion into a career. In this chapter, I analyze the role of shoutcasters within the larger framework of the e-sports industry. I build much of this analysis on the foundation that Taylor (2012) established in her investigation of the rise of e-sports. Much of Taylor's analysis still holds true today, but some other developments in the field have created new dynamics within shoutcasting that were not present during her initial encounters with shoutcasters.
Understanding how shoutcasters borrow from earlier forms of media, the issues they perceive within the industry, and how they cultivate their own identity while grappling with the hyper-connection found in live-streaming as a medium allows us to grasp the relationship e-sports broadcasting has with earlier forms of media while still creating its own identity. I begin with a very brief look at the history of shoutcasting.

Shoutcasting History

One can see that even early attempts at broadcasting competitive gaming borrowed heavily from their media contemporaries. Starcade, a 1982 show that ran for two years, marks one of the first forays into e-sports broadcasting. Though the term e-sports had not yet emerged, the show featured two opponents attempting to outscore each other on various arcade machines. If we look to Starcade as an early example of e-sports, then the origins of e-sports commentating resemble the game show commentary found in Jeopardy! or The Price is Right. Watching Starcade for the hosting alone reveals many similarities to other game shows: the host wears typical game-show host garb, pleasantly explains every aspect of the competition, and speaks with the broadcast voice we all recognize. Starcade also shows the constant evolution of competitive gaming coverage as it continued to refine its camera angles, presentation, and format over its two-year run.

The model which more closely resembles our modern vision of shoutcasting gained momentum at the turn of the twenty-first century. The title shoutcaster comes from the early streaming software used for e-sports broadcasting, SHOUTcast. While many people familiar with e-sports may have no idea where the term comes from, a prominent shoutcaster, djWHEAT (2012), claims that the title remains due to its signaling of the history of e-sports. SHOUTcast, a media streaming program, arrived in 1998, allowing interested parties to broadcast audio recordings to various 'radio' channels for free. SHOUTcast allowed for video streaming, but as one early shoutcaster I interviewed lamented, the bandwidth and equipment required for video streaming were prohibitively expensive. Instead of the audiovisual broadcast we regularly associate with e-sports live-streams today, early shoutcasters relied on audio recordings akin to early radio coverage of traditional sports. These early broadcasts only streamed audio to a few hundred dedicated fans on internet radio. Early shoutcasts follow the form of traditional play-by-play radio broadcasts, focused primarily on presenting every development in the game. In interviews, veteran shoutcasters were not shy about admitting the influence radio sportscasters had on their own style. One mentioned that he spent hours listening to live sports radio to hone his own skills. Early shoutcasters also performed many aspects of the production that they are no longer required to perform in the more mature e-sports industry. They would attend events and set up their own station, typically with their own laptop and microphone. It was a very grassroots affair. With little experience in the technical aspects of broadcasting, the productions emulated as much as they could from sports broadcasting to lend an air of professionalism. With the arrival of Twitch.tv and other reliable streaming platforms, much of the onus of production was taken off of shoutcasters.
Instead of acting as producers, directors, editors, and on-air talent all at once as they had in the early audio-only streams, shoutcasters are now more able to focus on the portion of their work from which they get their name. Shoutcasting after the early days of internet radio has come to not only sound like traditional sportscasting, but also look like traditional sportscasting.

Something Borrowed: Influences from Sportscasting

Wardrobe

Many of the shoutcasters I interviewed talked about wardrobe as a huge change within shoutcasting, one that was spurred entirely by looking at traditional sportscasting. Most shoutcasters got their start wearing t-shirts and jeans at various e-sports events. Today, you will rarely find a shoutcaster not wearing a shirt with a blazer. The images below show the incredible shift in shoutcasting just within the last six years.

[Figure 2. Left: Joe Miller at the 2009 Intel Friday Game London; right: Joe Miller at the 2015 Intel Extreme Masters World Championship in Katowice, Poland. Image credit: ESL, Philip Soedler and Helena Kristiansson. Flickr.com/eslphotos]

Both images feature the same shoutcaster: Joe Miller. The left-hand image comes from the 2009 Intel Friday Game London while the right-hand image comes from the 2015 Intel Extreme Masters World Championship. While the images are quite similar, the professionalism apparent in the right-hand image recalls that of a professional sportscaster. The gamer/geek vibe found in the left-hand image has been removed from the shoutcasting image. As a few of the shoutcasters I spoke with admitted, the drive to rework the shoutcaster wardrobe came purely from traditional sports. On top of that, they pointed to a desire to shed the gamer/geek stereotypes that e-sports had come to inhabit. By adopting professional attire, they felt that they could get rid of the old image and emulate the professionalism of a sports broadcast. Wardrobe is not the only aspect of traditional sportscasting that has made its way into shoutcasting.

Style

One of the more elusive aspects borrowed from traditional sports is the actual commentary style. I use the term elusive here to signal the difficulty in pinning down exactly why shoutcasters remind us so vividly of traditional sportscasters. Early shoutcasters had no models outside of traditional sportscasting, so they took as much as they could: "So as a broadcaster we look at traditional sportscasting. We pull from that and then make sure it fits in game casting." As it turns out, many sports commentary conventions translate well into game casting. As such, the first generation of casters shares many similarities with television sportscasters. Most of these early shoutcasters admit to being influenced almost entirely by traditional sportscasters. One caster explains, "Television is where we grew up, it's what we watched. So clearly that's where we're going to pull from."

Shoutcasters typically have no media training, instead relying on mimicry of earlier conventions to get by. As with most positions in e-sports, and similar to early sports writers and radio casters, shoutcasters are just passionate fans turned professional. In conversations, they each revealed a bit of their own personal history that pushed them towards broadcasting, but only one ever mentioned having received any sort of formal training. Years into his shoutcasting career, he "went back and did a journalism and broadcasting course for 6-9 months."
Of particular note, he mentions, "they did one really good project which was 'how to be a news presenter'. They taught me the basics of that." The rest, he says, he learned on-air through experience. The other shoutcasters I interviewed echoed this story. Most of the shoutcasters I interviewed fell into shoutcasting through happenstance and had to learn their craft on-air. Shoutcasters are akin to the very early television sportscasters who had to reinvent their style during broadcasts, like Bob Stanton, a radio sportscaster turned television sportscaster who would send his friends to sports bars to gather feedback and suggestions from audience members (Rader 1984). Echoing this inexperience and improvisation, one shoutcaster I interviewed confided, "the first time I had ever been on camera, I sat down and I was like, 'I have no idea how to do this.' I had done two and a half years of audio casting, but I had never done video." Another caster recalls of his first show, "All I knew going into my first broadcast was that I know this game. I know how it works, I know these players, and I play against these kinds of players. I don't know how commentary works, but I can do this." After these first trial broadcasts, both of the above-mentioned shoutcasters admitted to going back and watching traditional sportscasters to learn more about their craft.

Other broadcasting style conventions such as how to handle dead-air, how to end a segment, or how to transition into gameplay were lifted directly from sportscasting. Paul "ReDeYe" Chaloner, a prominent personality within the e-sports industry, addresses each of these techniques in his primer on becoming a professional shoutcaster, constantly pointing to various examples from traditional sports broadcasting to illustrate his points. In his section on dead-air, Chaloner writes, "[o]ne of the best pieces of advice I had for TV was from legendary sports producer Mike Burks (11-time Emmy award winner for sports production) who told me 'A great commentator knows when to shut up and say nothing'" (2009, 9). Chaloner uses traditional sports broadcasting as a way to explain shoutcasting, a clear indication of its influence on e-sports broadcasting.

Content Analysis: Play-by-play and Color Commentary in the NFL and LCS

Another convention lifted directly from traditional sports broadcasts is the arrangement of the casting team. Traditional television sportscasters fall into one of two roles: play-by-play or color commentary. Shoutcasters use these same two roles. Both sports broadcasts and e-sports broadcasts feature one of each type. The play-by-play commentator narrates the action, putting together the complicated and unconnected segments of the game into a cohesive narrative. The color commentator provides their in-depth analysis of the game, typically from the stance of a professional player. Shoutcasters have adopted the two-person team directly from traditional sports broadcasts. The path to each role follows the same pattern as well. An ex-professional player almost always fills the role of color commentary in both traditional sports and e-sports. Their insight is unparalleled. Color commentators attempt to break down complex series of events or highly technical maneuvers as if they were still a professional player. In the words of one e-sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best to emulate them."
He goes on to say, "You can read up on it and study it as much as you like, but unless you've lived it, you can't really comment on it." In comparison, a play-by-play commentator does not need to have the technical depth, but relies more on presentation. Even though a play-by-play commentator has most likely played hundreds of hours of whichever game they cast, they cannot fill the role of the color commentator. This dynamic allows play-by-play commentators to switch games with relative ease, whereas color commentators, both in traditional sports and e-sports, are locked into one game.

To illustrate the emulation of sports broadcasting found in e-sports, I now turn to a brief content analysis of the commentary found in a regular season NFL game and a regular season League of Legends Championship Series game. I start with the commentary from one play in an NFL game. After presenting the traditional model, I move to the commentary from one team fight in League of Legends to demonstrate how the convention has been adapted for e-sports commentary. In both cases, I have removed the names of players, commentators, and teams to cut down on jargon and clutter. Each case exhibits the dynamic present in the two-man commentary team.

NFL

With both teams lined up, the play begins and the play-by-play commentator comes in immediately.

Play-by-play: Here's [player 1] out to midfield, a yard shy of a first down. [player 2] on the tackle.

After the play has ended, the color commentator takes over.

Color: It's been [team 1] on both sides of the ball. Whether it be defense and the way that they dominated this ball game and then offensively, the early going had the interception, didn't get much going over the next couple of possessions offensively but since that time, [player 3] has been very precise in how he has thrown the football and they just attacked this defense every which way.

LCS

Three members of the Red Team engage Blue Team at Red Team's turret.

Play-by-play: This is going to be dangerous. Doing what he can to hold out. They're going to grab the turret, the fight will continue after the shield onto [player 1] is already broken. He gets hit, the ignite is completely killing the ultimate! He gets hit by [player 2] who turns around again and heads back to [player 3].

With the action over for the moment, the color commentator begins to speak.

Color: I thought he finished a camp here too...

The color commentator is cut off as two more members of Blue Team attempt to attack.

Play-by-play: Heyo, as the top side comes in here too. [player 1], will he hit a good ultimate!? Oh! They were staring right at him but now he's just left to get shredded apart here. They couldn't have thought that this was going to go well for them.

With the fight concluded, the color commentator continues again.

Color: Is this just the week of chaos? Because that was a really, really uncharacteristic lapse in judgement from [Blue Team]: not calling everybody into position at the right time, and [Red Team] with the advantage make them pay for it. They didn't expect the ignite from Nautilus. I think they expected Nautilus to have exhaust instead, but [player 1] pops the ignite, and as we said there is no armor so [player 2] just... and it continues!

The color commentator is cut off once again as the two teams engage one another for a third time.

If we look at these examples for their content rather than the specific moment in the game, we can catch a full illustration of the two-caster dynamic.
As we can see from the NFL example, the play-by-play commentator provides a running narration of the action in the game. When the action ends, the color commentator provides the meta-level analysis of the unfolding events. In the LCS example, we see that the same dynamic is present; however, due to the continuous action in the game, the transition into color commentary becomes difficult. In the first lull, the LCS color commentator tries to insert his analysis, but he is cut off by a second engagement. The color commentator stops talking immediately and allows the play-by-play commentator to continue describing the action. After the engagement ends, we hear the color commentator pick up again, explaining why the fight developed the way it did as well as offering his insight into why the teams played the way they did.

Entertainment and Narrative

Entertainment value was a repeated concept in my interviews with shoutcasters. Some went so far as to claim that their role was only to entertain. One stated, "I want to get you excited. I want to get you to watch the game as if it was a show on television." Many would point to good sportscasters as an example to follow. If we recall the example of the early days of radio sportscasting, casters had a difficult time making the transition to the new medium. Their broadcasts felt flat when compared with their print counterparts (Bryant and Holt 2006, 27). Early sportscasters got locked into the idea that their responsibility was to provide the basic play-by-play depiction of a match. The golden age of sports radio was ushered in by sportscasters such as Graham McNamee, who were so popular that they would be asked to cast games remotely. McNamee, like a live version of his print counterparts, was famous for creating florid depictions of the game; athletes became heroes and their play became combat as told by McNamee. While the presentation of live and accurate information was still essential, popular radio sportscasters shifted sports media from news reports to entertainment. Sportscasters are responsible for this shift. Without their expert embellishment, play-by-play depictions lack entertainment value.

Even non-sports fans can feel the excitement from a particularly good sportscaster. The game they portray is far more intriguing than any actual events happening on the field (Bryant, Brown, Comisky, and Zillmann 1982). This disconnect forms one of the primary reasons that the transition to casting televised sport was so difficult. The small liberties that sportscasters took were no longer acceptable in the visual medium. Once the home viewer could see the game, commentary had to shift to accommodate more scrutiny. Radio sportscasters were notorious for their embellishment. As Bryant, Comisky, and Zillmann note from one of their several investigations of sportscasting, roughly forty percent of commentary is dramatic embellishment (1977). In 1977, the authors tracked the amount of hyperbole and exaggeration in sports broadcasting and found that over half of the speech was dedicated to drama. E-sports shoutcasters, by comparison, rarely use dramatic embellishment of action. A few of the informants noted that they feel that embellishing actions is not possible due to their audience. The e-sports audience, as pictured by shoutcasters, includes mostly dedicated players. While many sports fans may play their sport casually, e-sports fans engage with the games they watch regularly.
As one shoutcaster explains, "we've only ever gone out to a hardcore audience." He acknowledges that the current audience is in flux, but the primary base of e-sports fans consists of intensely dedicated viewers and players. Because of this dynamic, shoutcasters feel that embellishment of the actions on screen would be difficult to slip past a discerning eye. Their belief that dramatic embellishment isn't possible may say more about their understanding of traditional sports fans than it does about their formulation of their role as commentators. While unacknowledged in interviews, the possibility for shoutcasters to add embellishment exists. Their choice not to use embellishment speaks more to their formulation of the e-sports audience than it does to their casting quality. Instead of embellishment of action, shoutcasters rely on another convention found in traditional sportscasting: narrative. Studies that focus on the media effects of sportscasting suggest that sportscasters fundamentally alter the audience perception of the telecast through story-telling and narrative (Krein and Martin 2006). Sportscasters take many liberties in their descriptions of the game to add a dramatic flair. In several empirical studies, Bryant, Brown, Comisky, and Zillmann (1979) found that when sportscasters created a narrative of animosity between players, viewers felt an increased amount of tension and engagement. They conclude that the narrative scope of the sportscaster is critical in the perception of sports broadcasting. This narrative creation has bled into shoutcasting as many shoutcasters attempt to amplify the emotional content of their games by highlighting underdog stories or hyping up animosity between players. One caster I interviewed connected his work to the narrative creation in sports commentary by stating, "Emotion is one of the key words in commentary. You need to be able to connect a certain emotion to the words you're saying. You need to be able to make someone scared for their favorite player or overjoyed when they win. Create greatest enemies. You need to be able to make these feelings through what you say or how you say it. Emotion is everything." This caster goes to great lengths to dig up statistics from previous matchups to provide a narrative for the match he casts. Through this investigation, the shoutcaster is able to contextualize a match with a rich history. Perhaps two players have met three times before and each time the result has been the same. Will viewers be able to share in the momentous victory of the underdog? As part of their preparation, shoutcasters will research all of the previous meetings between two players to create a history between them, a tactic which they acknowledge has been used in traditional sports for decades.

Production

Stream production is another realm where e-sports have started to borrow heavily. While e-sports producers may have gotten a head start on streaming live events, they often rely on the expertise of television producers to put a show together. Multiple shoutcasters pointed to a steady influx of television producers making their way into e-sports: "the way we approach a production is very much like television. A lot of the production guys that are getting into it are from television." In fact, the executive producer of the League of Legends Championship Series, an immensely popular e-sports program, is former Emmy-winner Ariel Horn.
Horn won his Emmy as an associate producer of the 2004 Olympics for NBC. Likewise, Mike Burks, executive producer for the Championship Gaming Series mentioned in the above quote from Paul Chaloner, had an immense amount of experience in televised sports before migrating to e-sports. These are just two of the many experienced television producers making their way into e-sports. Their style is beginning to show as e-sports events become more polished every year. If we recall the image of Prime Time League in the introduction to this thesis, we can see the influx of television conventions in e-sports from the production side. The shoutcasters benefit from the experience of working with television producers to refine their style. As the field has grown, however, we begin to see minor tweaks in style and delivery. Spending significant time with e-sports casting, in comparison with sportscasting, reveals several distinctions. Much of this difference comes with the age of the field, but just as Starcade evolved over its short lifespan, shoutcasters have found ways to make themselves unique. Their understanding of their role within the overall e-sports industry informs us of some of the key differences here.

Something New: Shoutcaster Identity

Shoutcasters are situated somewhere between fan and professional. As evidenced by the above investigation of how shoutcasters are informed by their traditional predecessors, the role of shoutcasters is still very much in flux. Shoutcasters are just recently creating their own identity separate from their sportcasting roots. In particular, the less experienced shoutcasters I spoke with use markedly different models to inform their own casting.

The Second Generation of Professional Shoutcasters

A second generation of casters is just now coming into the scene. Instead of looking to traditional sportscasters as their models, they emulate veteran shoutcasters: "my influences are the streamers that I watched. I watched everyone who casts and commentates...my commentary style comes from those guys. I don't know how much is conscious or just mimicry." This new caster has been on the scene for only a fraction of the time that the veterans have. In that time he has honed his shoutcasting skills not by finding sports commentary and seeing which aspects apply to shoutcasting, but by absorbing as much information as he could from other shoutcasters. Another fresh shoutcaster offers a fascinating disconnect from the older casters: "I definitely bounce off more e-sportscasters than sports. I just watch more e-sports than sports. Sports are so different than e-sports, there's so little that I can actually use from them." Where his predecessors admit to borrowing primarily from traditional sportscasters, this new generation has left the realm of traditional sportscasting behind. The professional casters provide material for an amateur level of shoutcasters to pull from. The shoutcasters I interviewed were all professionals who typically work on major events with massive support and budgets. With a robust network of shoutcasters to pull from, however, we may see much more support for the grassroots level of e-sports that many early fans are accustomed to. Current shoutcasters also provide a model for potential careers. Through the hard-fought struggle of years' worth of unpaid events, the shoutcasters I spoke with have created a legitimate profession worth pursuing.
Most warned me that the path is no longer as easy as it once was for them. Most of them pursued shoutcasting for the love of e-sports. They had years to fumble through persona creation, broadcast techniques, and conventions. New, potential shoutcasters are automatically held to a higher standard. A senior caster offered the following advice: "With how casting has changed, you need to be open to casting multiple games. You have to be willing to learn. There is a lot we can teach a caster, but you have to have some skills within you alone. You have to have some camera presence." The mention of camera presence signals a significant jump from early shoutcasting. Just a few years ago, the shoutcasters I interviewed sat down in front of a camera for the first time armed with nothing but game knowledge; camera presence was a foreign concept to them. Perhaps the most significant change to casters is their overall level of experience. Some of the shoutcasters I spoke with have been broadcasting for over a decade. Time has allowed these casters to experiment and find their own style. As mentioned earlier, many of the minutiae involved in running a show take time to learn. Most casters got their start casually. They may have been passionate about e-sports and created a role for themselves within the industry. Some are former players who made the hard decision to give up on their hopes of winning big to instead cultivate a community. As new professionals, shoutcasters are just now coming together with the support of e-sports companies under legitimate full-time contracts. The professional casters I spoke with all acknowledged a significant change in their commentary since making the transition into full-time casting with other casters around for feedback and training. One explained that he had never been sure how to handle dead air, moments when both casters are silent and there is little action in the game. Through feedback sessions with other casters, he learned that there are some appropriate times to let the viewer formulate their own opinions on the match. Heeding the advice of veteran casters like Paul Chaloner, he went on to explain that one of the problems he sees in shoutcasting more generally is that shoutcasters are afraid to just be quiet during a stream. Part of the emotional build-up of a game, he explains, is letting the natural flow of a game take its course without any input from the casters. It will be fascinating to watch as these expert networks inform e-sports broadcasts across the world. One informant remarked, "Now that we're all working together, we're learning a lot off of one another, which hasn't happened in commentary before." Beyond allowing veteran shoutcasters to compare notes, the professional status of shoutcasting provides training to new shoutcasters. One veteran claimed, "All the junior people are learning so much faster than we ever did. They're taking everything we learned over 5-10 years and doing it in months." These veteran casters can now pass on their experience and their style. Techniques like hand-offs at the end of a segment or transitions from the desk to gameplay often came up in my interviews as issues which take years to learn, but newer shoutcasters are able to pick these cues up from earlier shoutcasters instead of taking what they can from a sports show and hoping that everything translates well.
Beyond the expected roles that shoutcasters fill, they also perform many secondary tasks which don't typically fall to traditional sportscasters. In the very early days of live-streaming, shoutcasters were often responsible for every aspect of the broadcast from set-up to teardown. Some shoutcasters still regularly assist on production aspects of the broadcast such as graphics packages, camera set-up, and audio checks, but others leave the production aspects of the stream to more experienced hands while focusing instead on updating websites, answering tweets, creating content, or streaming their own play sessions. No two casters seem to fill exactly the same role within the broadcast team. They do, however, share some similarities which seem to form the shoutcaster identity.

Record-keepers and Community Managers

All of the casters pointed to stats-tracking as part of their roles outside of their air-time responsibilities. Most of them keep highly detailed databases full of every possible stat they can get a hold of from game clients and public databases. These stats can be as simple as wins and losses from remote regions or LAN tournaments that do not post their results online. The stats can also get as minute as the number of units a particular Starcraft 2 player built in a single match. When the data isn't readily available, shoutcasters go out of their way to curate the database themselves. While some keep their database secret to provide a personal flair to their casting, others find it important to share this information with their e-sports communities. One shoutcaster recalled his surprise when he first worked with a major South Korean e-sports company with its own dedicated stats team. He expressed that he had never realized how much he needed a dedicated stats team like you find in traditional sports until that moment. It was then that he realized how much of his daily routine stats curation filled. While he was grateful for the help, he also felt personally responsible for stats collection and did not entirely trust the figures from the professional statisticians. This example shows the difficult position e-sports fills: constantly borrowing from traditional sports while not yet fully able to cope with the maturity of the sports media industry. Another role which tends to fill a shoutcaster's daily routine is community maintenance. Whether the caster creates their own content on gaming sites, responds to fans on social media, or spends their time streaming and interacting with the community, they all mentioned some form of community maintenance as part of their duties as a shoutcaster. This particular focus on community maintenance most likely results from the grassroots origins of shoutcasters. These casters were a part of an e-sports community long before they became shoutcasters. Whether they view it as their professional responsibility or a social responsibility remains unclear. They all admit to some level of e-sports advocacy, however. They view PR and the proliferation of e-sports as part of their responsibilities. The most effective way to tackle this issue, many of them have decided, is through community engagement. The community aspect of shoutcasting identity leads me to a discussion of the affordances of the hyper-connectivity in live-streaming.
Grappling with the Hyper-Connectivity in Live-streaming and E-sports

Shoutcaster Connection

I have yet to meet anyone in the e-sports industry who has not remarked on the unique level of connection present in e-sports. Shoutcasters especially tap into the network created in these online communities. In a representative summary of my conversations, one shoutcaster explained, "the connectedness is so unique in e-sports. The way that we can interact with fans instantly. The players at the end of the day are gamers, they know exactly where to look. They've got Twitter, they go on Facebook, they post on Reddit." Audience members connect ephemerally in the IRC of a Twitch stream, but they constantly scour the social media outlets of their favorite stars, e-sports companies, and shoutcasters, creating a deeply connected community. Professional shoutcasters understand that the e-sports communities operate in a unique way when compared to traditional sports fandom. E-sports fans have an odd connection to franchises or teams within their chosen e-sport. As mentioned before, turnover rates and general industry growth force entire communities to radically reform from one season to another. Where traditional sports fans often follow a team based on geographic loyalty or familial connections, e-sports fans do not have that option. While you will often hear of fans cheering for teams in their geographic region (North America, Europe, South-East Asia, etc.) if they make it to the last few rounds of an international tournament, they may also base their fandom on a team logo or a particular player instead. Shoutcasters recognize this dynamic and use it to cultivate the community. Communication, they claim, separates them from traditional sports broadcasts or even news anchors: "We communicate more with our audience than you'll see TV news anchors or celebrities, but it's part of our job to get more information out there." The focus on communication seems to be unique to shoutcasters as the majority of it happens outside of their broadcasts. While many shoutcasters define their role on-screen as an educator of sorts, the notion of spreading information about e-sports falls outside of their screen time. This double role of broadcaster and community manager extends what media scholars have dubbed the broadcasting persona beyond the point typically associated with sportscasters or news anchors.

Shoutcasters and Persona

Horton and Wohl (1956), two social scientists who studied mass media, make the assertion that mass media performers make a conscious decision to create and maintain parasocial interactions through the creation of a persona. Social scientists have coined the term parasocial interaction for the intangible connection which most of us feel to some form of media or another. Standing in contrast to interpersonal interaction, a person-to-person exchange between two real and cognizant human beings, parasocial interaction is instead a unidirectional relationship (Miller and Steinberg 1970). The feeling of connection we create with fictional characters, news anchors, or sports stars does not fall within the definition of an interpersonal interaction. Whether mediated through a screen or the pages of a book, a parasocial interaction does not manifest in an exchange of thoughts or words between individuals. Rather, it is embodied and lived through one individual. Schiappa et al.
(2007) conducted a meta-analysis of parasocial interaction literature to better understand how broadcasters 'hook' viewers to a certain show. They concluded that parasocial interactions can create and prolong connection to television programming. While Schiappa et al. concede that there are a few opportunities for a parasocial interaction to result in interpersonal relationships in the physical world, the compelling issue is the establishment of intimacy mediated through means well outside of a person-to-person context. Horton and Wohl set out with the goal of creating a term for the relationship between performers and their audience in mass media. The authors suggest that the emergence of mass media created an illusion of connection to performers which was previously unavailable. They argue that the connection people feel to mass media stars is analogous to primary social engagement. If this type of engagement takes place in radio and television, where users have no opportunity to interact with audience members who are not co-present, it follows that the interaction between broadcasters, their audience, and one another in a Twitch stream is a particularly deep connection even beyond the level noticed by Horton and Wohl. Shoutcasters create a familiar face and personality for audience members to connect with. Mark Levy (1979), another proponent of parasocial interaction who focused his work on news anchors, suggests that both news anchors and sportscasters help to create and maintain communities through regular scheduling, conversational tones, and the creation of a broadcasting persona. Shoutcasters perform this same role to even greater effect due to the constant changes surrounding the e-sports industry. The regularity and consistency of shoutcasters' broadcasts helps to foster a feeling of genuine connectedness within the community. Although difficult to quantify, many conversations with shoutcasters turned to the odd feeling of connection that e-sports fans feel towards one another. One shoutcaster attempted to explain this connection by stating, "[w]henever I go to an event, I realize that fans are just friends I haven't met yet." I found this statement to be particularly poignant. It hints at the sort of intangible connection e-sports industry personalities and fans feel to one another through live-streams. Anecdotally, this air of friendship permeated e-sports events that I have attended and went well beyond what I have felt at traditional sporting events or concerts. Previously, persona creation and maintenance occurred on-screen or at events only. Social media has forced many media personalities to extend their personas beyond the long-held notions of broadcaster-fan interaction. In many ways, shoutcasters must go beyond even these extended boundaries into near-constant persona maintenance because of their roles in live-streaming and community maintenance. Many shoutcasters give up their personal, off-air time to stream their own gameplay or to create video content, which necessarily prolongs the amount of time they embody their broadcast persona. I found that shoutcasters create a variation on the broadcast persona. Rather than a full-blown broadcasting personality which they inhabit while on-air, most shoutcasters have found that between community management, social media interactions, and broadcasts, they almost never get an opportunity to step out of their role as a shoutcaster.
Due to this near-constant connection, most shoutcasters acknowledge that they act differently on air, but they tend to simply invoke a more upbeat and charismatic version of themselves. Echoed in each of the interviews, the casters point to the idea of excitement: "you have to get excited for the person out there watching." Even if they are not in the mood to shoutcast, or they have had a bad day, shoutcasters must leave their personal issues out of the broadcast. This aspect of the shoutcaster's personality comes out in all of their interactions on social media as well. Most of the shoutcasters I interviewed situated their role in e-sports as somewhere between Public Relations, Marketing, and Community Management. One of the casters explained the importance of invoking the broadcast persona when speaking about sponsor expectations: "We're working in an industry with companies behind us, we can't always say exactly what we want to say." Shoutcasters' acknowledgement of their involvement in securing sponsorships signals an interesting shift in the e-sports industry: the focus of the broadcast team on potential revenue generation. I turn now to an analysis of the revenue streams found in both traditional sports and e-sports broadcasting.

Chapter 3
Revenue

Funding Professional Play

After situating e-sports broadcasting within the greater sports media landscape, particularly in conventions, casting, and use of medium, it is important to analyze the portions of sports media production that have made their way into e-sports broadcasting. If we acknowledge the influence that traditional sports broadcasting has had on e-sports broadcasting in the realms of conventions and casting, we must also understand the importance of this relationship at the production and economic levels. In this chapter I discuss how the history and development of the sports media industrial complex in the U.S. has bled into the economics of the e-sports industry. In particular, I focus on how sports media models inform the e-sports industry while portions of the sports industry's revenue streams remain out of reach for e-sports broadcasters. Despite the reshuffling of the sports media industrial complex mentioned in the introduction to this thesis, traditional sports broadcasting still relies on the same revenue streams that it had in the past. Traditional sports producers have fully capitalized on the commodification of their content. E-sports producers, in contrast, are still shaping their revenue streams within live-streaming. The commercialization found in the sports media industrial complex has taken hold of the e-sports industry in several notable ways. Following in the example set by Stein's thesis work, it is not enough to just acknowledge the relationship between e-sports and traditional sports media; we must also understand the path which brought e-sports broadcasting to its current state.
Introduction

Sportscasters on a Digital Field

Sitting at a desk under bright lights, two announcers talk at a fast clip. After a weekend full of commentating, their voices are scratchy and fading, yet their excitement never wanes. No one watching can see the two men, though a camera sits just a few feet in front of them. Instead, the live audience and home viewers see the European champions, Fnatic, going head to head with SK Gaming on a virtual battlefield. They're 55 minutes into an absolute slugfest; the two announcers' voices rise and fall with the action of the game. Over the PA, the audience hears that this game is mere seconds away from ending. The SK team has Fnatic on the ropes after brilliantly defending their base. Fnatic's star player, Xpeke, stays, attempting to win the game singlehandedly. The casters initially dismiss the last-ditch effort while the bulk of SK's team move to end the game on the other side of the map. However, the camera stays on Xpeke, who is in a showdown with one member of SK. Nanoseconds away from defeat, Xpeke dodges a deadly ability. The casters erupt in nearly unintelligible, frantic excitement as the 25,000 live attendees at Spodek Arena in Katowice, Poland cheer at the sudden Fnatic victory. Back in the real world, the entire Fnatic team jumps away from their computers and piles onto Xpeke while we hear, "I do not believe it! Xpeke's done it!" Over 643,000 online viewers around the world watch the camera pan across the SK team, stunned in their defeat. From their home computers, these viewers have just witnessed e-sports history.

The above scene unfolded at the 2014 Intel Extreme Masters World Championships in League of Legends, a popular e-sports title. The solo maneuver that Xpeke performed on that stage has since made its way into common League of Legends vernacular, being invoked in any match, casual or professional, where a player deftly ends a game singlehandedly. E-sports, which encompasses many more titles than League of Legends, has become a cultural phenomenon of sorts. People may wonder whether the whole scene is just a flash in the pan or something more significant. I begin this thesis in much the same way that I have begun many conversations over the past two years: defining e-sports. In most of those conversations, I simply say "professional video-gaming" and move on to other topics. Here, though, I fully elaborate on what e-sports means. More than just professional gaming, e-sports is an entire industry created around competitive gaming at all levels of play. An e-sport is not just a sports video game like the title might suggest, though some e-sports titles are sports video games. Instead, e-sports titles are meticulously balanced, competitive, multiplayer games. Many games would fall into this category, but it takes a community of people to take an e-sport to the level of the classics like Counter Strike and Starcraft. Such communities are core to the identity of e-sports. Indeed, this identity itself is an oxymoronic collision of geek and jock culture, a mixture that media would have us believe acts like oil and water. Even within e-sports communities lines are hazy and misdrawn. As Taylor and Witkowski (2010) show in their study of a mega-LAN event, the e-sports scene is fraught with identity issues not only from outside, but within as well.
The jock-like first-person-shooter (FPS) players competing at the same event as the nerdy, enigmatic World of Warcraft players show the conflicting, lived masculinities in e-sports. Players are unsure whether to act like superstar athletes or tech-geeks. Can you be both? The word e-sports alone evokes such a conflicting image. Electronic sports seems almost paradoxical in nature. Have we moved beyond a physical match of skill and extended our contests to avatars in a digital world? How can two players sitting at a desk be sporting? As e-sports continue to grow not only as a segment of the gaming industry, but as a spectator affair, we begin to see the 'sports' side of e-sports both challenged and invoked more frequently. In a telling case, Twitter erupted after a Dota 2 tournament made an appearance on ESPN 2 in 2014. With $10 million at stake, many e-sports fans thought the event warranted the attention of the all-sports network. Plenty of viewers took to social media to praise the move made by ESPN. Others were shocked: "Espn2 is seriously airing an online gaming championship? Wtf man. This is our society now. That is not a sport" (Hernandez 2014). The sports status of e-sports has been both defended and attacked by journalists, academics, and fans alike. The debate about the status of e-sports has been raging for many years. Witkowski's piece, "Probing the Sportiness of E-Sports", presents both sides of the argument, pulling from games studies scholars and assessing e-sports on their terms. Ultimately though, I believe she shelves the debate deftly when she states, "sport is a personal experience... as many a sporting scholar has written before - if an individual considers the sporting activity they are engaged in to be a sport... then it is a sport" (2009, 56). I do not wish to rehash this debate. I have no stake in it. As Witkowski asserts, the attempt would be futile. Instead, I accept the role traditional sports have played in the shaping of e-sports. In fact, exploring the relationship between e-sports and their traditional counterpart drives this work. In what follows, I argue that the sports media industrial complex has fundamentally shaped the current e-sports industry. Beyond this grounding, e-sports broadcasters constantly borrow from traditional televisual broadcasts, using models that they feel to be appropriate for their medium. Regardless of whether e-sports qualify as sports or not, they are constantly informed by sports broadcasting and follow a trajectory set out by traditional sports models. This work comes at an interesting moment in e-sports history. E-sports audiences have never been larger: Riot Games boasted an impressive 27 million viewers for the League of Legends World Championship in 2014, while the 2015 Intel Extreme Masters world championship saw over 1 million concurrent viewers across multiple live-streaming platforms (Riot Games 2014; ESL 2014). An old classic, Counter Strike, has re-emerged, albeit in a new package. The audience it continues to draw proves that some titles have staying power in this fickle industry. At the same time, a new title, League of Legends, consistently pulls in over 100,000 concurrent viewers for its weekly shows in the U.S. and E.U. As the League of Legends Championship Series moves into its fifth season, it has come to resemble a traditional sports broadcast more than it does its fellow e-sports shows.
A new addition in Season 5, a segment called Prime Time League (PTL), is nearly indistinguishable from ESPN's Pardon the Interruption (PTI) at a glance.

Figure 1 - Left Image: Prime Time League; Right Image: Pardon the Interruption

Comparing these two images reveals the level of sports emulation found in e-sports broadcasting today. From the stats and schedule ticker at the bottom of the screen to the show rundown along the edge of the screen, an uninitiated viewer would have difficulty distinguishing between the e-sports show and the traditional sports show. A steady influx of television producers and directors is starting to shape an industry that already has an identity crisis while still investigating how best to harness the new medium of live-streaming. These assertions are not meant to give the impression that we stand on the edge of wholly untouched land as pioneers in a new frontier. As shown in the e-sports literature review to follow, the e-sports industry has a history of evoking the feeling of standing on a precipice.

Organization

In the introduction, I first provide a brief history of e-sports and take note of the directions e-sports scholarship has pursued. Following this review, I introduce the sports media industrial complex to better situate e-sports broadcasting within the larger media landscape of sports broadcasting: the focus of chapter 1. The first chapter begins by looking at the long history of sports and media. By introducing the full gamut of sports media, I am better able to investigate how e-sports broadcasting stays in conversation with each of its predecessors. As evidenced in the reshuffling of sports media through history, we can see that e-sports make use of all of these forms of media while creating something new. During this chapter, I look to the transition moments in traditional sports broadcasting as the foundation of the e-sports industry. Moments of tension and doubt within the sports media industry as it shifted from one medium to another provide perfect lessons to be learned by the e-sports industry as they struggle with some of the same issues found in the reshuffling of media history. Indeed, while making use of the same media through journalism, public relations, and audiovisual broadcasts, the e-sports industry constantly wrangles with the use of the newly emerged medium of live-streaming. Television especially influences live-streamed broadcasts, which e-sports broadcasts tend to approach with the same framework as television. Chapter two focuses on e-sportscasters, also known as shoutcasters. I begin the chapter with a brief look at the history of shoutcasting. Considering that many of the early shoutcasters pull solely from traditional sportscasters, understanding their influences is crucial in understanding how e-sports has evolved in the way it has. As, I argue, the single most pointed signaling of the sportiness in e-sports, these individuals have pushed the e-sports industry towards a sports model. When first-time viewers or listeners leave an e-sports broadcast with the distinct feeling of a sports broadcast in their mind, it is the shoutcasters doing their job. They rely heavily on conventions set by traditional sportscasters. Much like their predecessors when faced with something new, shoutcasters borrowed what they could and innovated when there was nothing to borrow.
Chapter two also focuses on shoutcasters' formulation of their identity within the e-sports industry as personalities, professionals, and record-keepers. Shoutcasters are just now creating an identity separate from traditional sportscasting. Where veteran shoutcasters relied primarily on traditional sports broadcasts, newer casters look instead to other shoutcasters. These shoutcasters are reshaping their identity while attempting to fully embrace the new medium of live-streaming. The third and final chapter tackles the topic of economics in e-sports. As the history and trajectory of sports broadcasting has profoundly affected the e-sports industry, many of the economic models present in traditional sports bled into the e-sports industry as well. The e-sports industry in the US and Europe has yet to be analyzed as such. Some work (Taylor 2012) has focused on e-sports revenue streams including sponsorships, company models, and team ownership, but overall, the subject remains underexplored. Dal Yong Jin's (2010) analysis of the political economy of e-sports in South Korea offers a tool set for this chapter. While the South Korean e-sports model spawned out of an extremely particular set of circumstances that cannot be readily applied to the U.S. or E.U. e-sports scenes, Jin's investigation of the economic systems surrounding e-sports translates well to my own investigation of the U.S. and E.U. industries. As staggering prize pools continue to make headlines, it is easy to lose sight of the economic system working behind the scenes to keep e-sports financially salable, or in some cases not. The third chapter delves into traditional sports economics and their influence on the e-sports industry. In some areas, the models translate perfectly. In others, e-sports has been unable to tap into the same revenue generators as traditional sports. Unless some developments significantly alter the e-sports industry, it may be more tenable to pursue other models instead of the sports industry.

Methods

This thesis makes use of many qualitative methods including historical analysis, interviews, and fieldwork. To grasp the significance and situation of e-sports broadcasting in its current state fully, one must analyze the same developments in traditional sports broadcasting. As one takes a deeper look into the past of the professional sporting industry, its influences on e-sports become clear. A feedback loop has been created between the two. Historical analysis offers a glimpse at key moments which defined the incredibly successful global sports industry. Not only are similar situations appearing in e-sports, but e-sports pushes back into each of the investigated forms of media. A few of the issues currently facing e-sports could be resolved through following the path established by traditional sports, while other issues have been caused because so much has been borrowed. I also had the pleasure of conducting seven interviews with professional shoutcasters. I limited the selection of shoutcasters to full-time professionals, rather than amateurs, to get an insight into how these new professionals view their role within the industry. Roughly half the participants are veteran shoutcasters of five or more years. The other half have joined the scene more recently, with one in particular having shoutcasted professionally for less than one year.
As these informants are a few of only dozens of professional shoutcasters in the world, I have attempted to keep their identities anonymous. As professional personas, some of these casters may benefit from being associated with this work, but I do not want to run the risk of potentially linking these shoutcasters with their statements in the event that this information could somehow affect the community's perception of the individual or potentially harm their prospects within the e-sports industry. The conversations were all positive, but one can never truly assure their informants that information they have provided in confidence will have no repercussions in any foreseeable future. With these considerations in mind, I decided before conducting the interviews that the informants would remain anonymous. Finally, I was also able to spend time working within the e-sports industry. My time spent working for a prominent e-sports company profoundly shaped this thesis. Working alongside industry professionals sparked countless conversations about the current climate of the e-sports industry and possible futures. These conversations have both helped and challenged my thinking about the e-sports industry. While I often refer to the e-sports industry or community as a homogenous whole, the professionals who live within the space are not all of one mind and it would be a mistake to present them that way. Within e-sports, there are many different games and communities vying for viewers, players, and attention. What follows is my best attempt at wrangling the many paths e-sports has started to follow.

E-sports Literature Review

E-sports is still a young industry and an even younger subject of critical inquiry. Most entries into e-sports scholarship have emerged within the last five years. E-sports literature tends to come from the much older tradition of games studies, but ties into many other fields including the social sciences, cultural studies, economics, and law. Professional-gaming literature is a veritable hotbed of potential research topics with more articles, theses, and dissertations appearing every year. Much of the growing body of e-sports literature focuses on the professionalization of gaming (Jin 2010; Mora and Heas 2005; Swalwell 2009; Taylor, Nicholas 2009; Taylor, T.L. 2012; Witkowski 2012). These histories offer much more than a rundown of the events that created the e-sports industry. They also offer insight into our contemporary social moment. The arrival of a professionalization of video gaming signals many significant developments within both western and non-western culture. The global nature of e-sports and its meshing together of complex and often conflicting identities continues to beg investigation. E-sports literature primarily resides within the social sciences. Many cultural analyses in e-sports (Chee and Smith 2005; Harper 2010 and 2014; Hinnant 2013; Swalwell 2009; Taylor 2011) have focused on the communities growing within different scenes. Todd Harper, for instance, investigates the culture of competitive fighting games, a fascinating community which stands both within and at odds with the rest of competitive gaming. Gender studies are also becoming increasingly common within e-sports literature (Chen 2006; Crawford 2005; Leonard 2008; Taylor 2009 and 2011; Taylor and Witkowski 2010; Witkowski 2013).
With the fascinating and fraught formulation of masculinity within these spaces as well as the perceived absence of femininity, gender studies are incredibly important within e-sports literature. Nicholas Taylor (2011) offers insight into the ability of e-sports to create embodied performances of masculinity at live events which spread through communities specific to certain titles or genres. Taylor and Witkowski (2010) also show the conflicting versions of masculinity that appear in different e-sports genres. There has also been an increasing focus on e-sports as a spectator activity. Jeff Huang and Gifford Cheung (2012) found in a study that many of the e-sports fans they investigated prefer watching high-level play rather than playing a match themselves. Kaytou and Raissi (2012) also investigate spectatorship in e-sports with a focus on how best to measure live-streaming audiences. Others (Bowman 2013; Gommesen 2012; Kow and Young 2013) show that the audience in e-sports has a profound effect on performance for the players, akin to a traditional sports audience. These scholars also investigate the expertise apparent in e-sports players that is passed on through spectating as often as practicing. As the professional play of video games fascinates so many, e-sports literature has understandably focused primarily on professional players. Notable exceptions include Jin (2012) and Taylor (2012) who, while still heeding players, also investigate the surrounding factors which allow for play at a professional level. Without these other factors, professional players would not exist. It is from the tradition of these two authors, among others, that I base this work. This thesis, like many of the works listed above, seeks to better understand the phenomenon of e-sports while analyzing a particular segment of the scene. With few investigations into the broadcasting of e-sports, I hope to contribute to e-sports literature in a way that is both unique and replicable to other systems found within the larger e-sports framework.

Sports Media Industrial Complex

As sport and media become increasingly intertwined, it becomes difficult to analyze one without at least acknowledging the impact of the other. Pointing to the inextricable link between sports and media, sports media scholar K. Lefever (2012) argues, "while sport provides valuable content and audiences for media operators, the media is a revenue source and promotional tool for sport." As such, the steady professionalization and, in turn, commercialization of sport relies heavily on its media counterpart. The subsequent interdependence between media outlets, sponsors, and sports leagues creates what is often referred to as the sports/media complex or sports media industrial complex (Jhally 1989, Rowe 1999, Maguire 1991). Wenner (1989) coined the neologism MediaSport to define the deeply rooted relationship between sports and media. The two can hardly be considered separate anymore. Stein (2013), a Comparative Media Studies alumnus, building on the work of these earlier scholars, created a model which could be applied to new arrivals in the sports media landscape. Thankfully, Stein provides a fairly replicable analysis of sports video games within the broader sports media landscape. His investigation of the relationship between televisual sports video games and sports media largely informs my own work. He notes an almost relentless stream of advertising and commercialization rhetoric appearing in sports video games.
Building on the work of Wenner, Rowe, and Jhally, he argues that the commodification and capitalist trends found in traditional sports broadcasting bleed into newer media such as video games. This steady influx of advertising and commercialization can be found in e-sports as well. As e-sports broadcasters gain more experience and access to more robust technology, they have started to incorporate many of the same commercial opportunities Stein noticed in sports video games. Segments of the broadcast are occasionally sponsored, or one might see a sponsor make an appearance in an event's title, such as the Intel Extreme Masters tournament. Where Stein argues that sports video games incorporate these advertisements as a signifier of their televisual legitimacy, I argue that e-sports broadcasters make use of the same strategies because they are informed by earlier forms of sports media. The steady commercialization found in e-sports reveals the influence that the sports media industrial complex has had on the e-sports industry. In documenting the dynamics of the sports media industrial complex, Jhally (1989) argues that sports are best viewed as commodities. Jhally's model focuses on the sporting industry in the US prior to the emergence of new media. More readily applicable to e-sports, Lefever's (2012) analysis of the sports media complex within new media details a phenomenon which has upended the former relationships between stakeholders in the sports media industrial complex. She claims that "the sports/media complex has somehow changed, allowing the different stakeholders to take up new roles" (Lefever 2012, 13). The stakeholders, including sports franchises, sponsors, and media outlets, have had to adapt to a new media landscape with new roles. These new roles are more transient within the high-demand world of new media. Sports organizations and franchises have taken a more active role in connecting with fans, media outlets have taken a larger interest in sports franchises (often buying sports franchises if it is less expensive than purchasing media rights), and sponsors have taken advantage of new, innovative ways to reach consumers (Lefever 2012, 21). According to sports scholars Haynes and Boyle (2003), television sports viewers are no longer expected to just sit back and relax. Instead they are expected to follow their sport through social media, forums, blogs, and other digital outlets. This new, active fan fits well within the e-sports industry and live-streaming, but has changed the traditional sports media industrial complex. Before delving too far into the influence of traditional sports economic models on e-sports, however, I will first situate live-streaming and e-sports within the larger sports media industrial complex.

Chapter 1
Sports Media in Transition
From Print to Live-Streaming

Every day, millions of Americans are catching up with the latest sports news through print, radio, television, and online. Sports have saturated the entire spectrum of mass media in the US. With the emergence of each form of mass media, sports coverage has been at the forefront of adoption and innovation (Bryant and Holt 2006, 22). Each major medium shift in the US has been accompanied by a massive reshuffling of the sports media landscape. Often, this reshuffling opens a space for a particular sport to take up the new medium, create conventions, and carve a path for others to follow.
These sports were not spawned by mass media, but their spike in popularity around the emergence of a new medium indicates very specific social moments in the US. Early sports magazines and print coverage of sports focused primarily on prize-fighting, radio ushered in the golden era of baseball, and television transformed football into a titanic entertainment industry. The rise and stabilization of sports media are as much a product of available technology as they are indicative of societal preoccupations of the time. If sports and sports media are indicative of our social moment, then what can we glean from the arrival of live-streaming and e-sports? The co-evolution of sports and media is the coalescence of many factors including changes in power structures, modes of production, and available technology. As Bryant and Holt argue in their investigation of the history of sports and media, "[e]ach epoch of social evolution has witnessed important sports-media developments that were affected by the evolving socio-cultural environment" (2006, 22). In what follows, I trace the co-evolution of sports and media with particular focus on the relationship between emerging mass media and the media ecology surrounding that emergence. By documenting these moments of turbulence, I establish the framework necessary to analyze live-streaming as a new medium of which e-sports has emerged as an early adopter and convention creator. Live-streaming did not emerge independently from its predecessors, but rather delivers on the preoccupations of our current social moment. It has once again started a reshuffling of the roles of media within the sports media complex. E-sports, while primarily viewed through live-streaming, relies on all of the previous forms of media to varying degrees. With this framework in mind, I argue that the feedback between live-streaming, e-sports, and traditional sports has spawned an industry which roots itself in traditional sports media while still investigating the full potential of live-streaming. I begin by briefly discussing sports media in antiquity with Thomas Scanlon's (2006) piece on ancient Mediterranean sports and media. After this introduction to sports media, I move to the US in the late eighteenth century with the emergence of the first sports-only publication, the sports magazine, as well as early print news coverage of prize fighting during the rise of industrialization and nationalism. The next section maps the push towards immediacy in sports coverage and the rise of radio. On the heels of radio and the golden age of baseball, I discuss the early issues with televised sport before the post-war era. Moving into the 1950s and 1960s, I detail the transformation of football into a televisual sport accompanied by a very specific social contingency. I then transition into an investigation of live-streaming and e-sports, particularly how both are in conversation with sports media history.

Origins of Sports Media

As classicist Thomas Scanlon (2006) posits, there is no history of sports without its media counterpart. Media in antiquity, he argues, "are a tool of society, a means of transmitting a message, primarily one from the rulers to the ruled" (Scanlon 2006, 17). While his definition is quite limited, Scanlon is correct in noting that media are inflected with the power structures of a society. Sports as media were classically used by those with power to reinforce the hierarchy.
Sports events were "represented as a benevolent benefaction from the rich, noble, and empowered to those marginalized" (Scanlon 2006, 18). This reinforcement of power structures comes through not only in the production of sporting events, but also in the medium itself. Scanlon suggests that the most powerful sports 'medium' in classical times was Roman architecture. The massive circuses and arenas were meant to "provoke awe, admiration, and obedience in the citizens" (Scanlon 2006, 18). Scanlon establishes that the predominant sports medium in a given society correlates directly with their notions of power. Within the realm of more dispersed authority such as the Ancient Greeks, sports media reflected the high value of an individual and his merits. Depictions of athletics in Ancient Greek poetry and pottery, made by and for the common people, focus on a particular athlete's prowess more than the event itself. On the other hand, societies with incredibly rigid hierarchies and god-kings, such as the Ancient Egyptians and Persians, tend to represent sports as a demonstration of the ruler's power over their people. Ancient Rome, with its centrally focused authority, used architecture to demonstrate the power of the nobility as both benefactors and arbiters, diminishing the role of the athlete to that of an entertainer. Moving into more recent history with media such as newspapers and radio, Scanlon concludes that sports media became an amalgamation of both the Roman and Greek styles: large spectacles with massive personalities.

Establishing a Media Landscape: Early Sports Media in America

The importance of the printing press to modern society cannot be overstated. While its precise effects are still being debated,¹ the affordances of the printing press allowed individuals to produce and disseminate a massive amount of information far more efficiently than ever before. With a massive rise in literacy rates and increased access to print brought about by the printing press, the reading population of the world shifted (Eisenstein 1983). While early readership was restricted to a very small subset of society, the printing press paved the way for the coverage of more mundane topics such as sports. In their analysis of sports media in pre-industrial America, sports media scholars Jennings Bryant and Andrea Holt point to two major developments: first, the appearance of sports in newspapers as 'general news' and second, the creation of a completely sports-centered publication: the sports magazine (2006, 22).

¹ See Elizabeth Eisenstein. 1983. The Printing Revolution in Early Modern Europe. New York: Cambridge University Press.

The advent and success of sports magazines in the early nineteenth century stands as a marker for some of the intellectual shifts of the industrial era. During this time we see a professionalization of sport in the form of prize fighters. We also see a shift from sports as a local leisure activity to something that one follows from a distance. Sports contests began to take on implications beyond a mere matching of athletes. Many sports magazines started out as independent, one-person operations that began circulation in the 1820s and 1830s (Bryant and Holt 2006, 22). The Spirit of the Times, one of the earliest iterations of the sports magazine, actually reached a circulation of over 100,000 readers by the 1840s. The success of this initial sports-focused publication displays the roots of the American sports media tradition.
While they note the significance of sports magazines in the overall climate of sports media in America, Bryant and Holt trace the advent of modern sports media to recaps of prize fighting in the Penny Press age of the 1830s. With increased circulation to the middle and lower classes, sports coverage increased substantially in the mid-nineteenth century. Sports coverage in the Penny Press era focused on creating spectacular depictions of sporting events. As McChesney, a media historian, points out, James Gordon Bennett, owner of the New York Herald, was "one of the first exponents of 'sensationalism' as a means of generating circulation, and sport fit comfortably within this rubric" (1989, 51). Out of the sensationalism present in these early newspapers, sports began to take on more significant cultural meaning. There was particular focus on regionalism and nationalism. Sports media scholar J. Enriquez explains that sporting events were far more likely to be covered if they featured a contest which reflected the social preoccupations of the day, such as a northern horse racing against a southern horse, or an American boxer fighting a European (2002, 201). Through these mediated depictions, sporting events were encoded with much more meaning than a simple contest. They reflected the contemporary hopes and anxieties of the people. Sports media built up athletes as representatives. Newspaper recaps did much more than simply describe the actions; they created dramas (McChesney 1989, 51). The hyped-up imagery of athletes and their contests created through the Penny Press and sports magazines became the paradigm for sports coverage for decades while a new sport caught America's attention.

Newspaper Sports Writing and the Rise of Team Sports

The rise of baseball as a national pastime coincides with the period just after the American Civil War. McChesney explains, "The Civil War introduced baseball to an entire generation of Americans, as the troops on both sides played the game when time permitted. Indeed, baseball emerged as the preeminent national team sport during this period" (1989, 52). After the Civil War, baseball helped mediate conflict by providing common ground for northerners and southerners. This moment was one in which the country was seeking to heal its rift, looking for neutral things that could bind the nation together. Baseball filled a political agenda by giving people something to focus on without opening old wounds. Sports writing changed drastically in the years following baseball's spike in popularity. Sports coverage began to receive regular columns and increased coverage throughout the late nineteenth century, leading to a new kind of journalistic specialization: the sports-writer (Enriquez 2002, 202). This fixation on sport was a result of new socio-cultural environments. Mandelbaum (2004), a sports media scholar and historian, argues that the industrial revolution created a new sports landscape through several major developments. First, the notion of childhood had expanded. In the nineteenth century, the period between birth and entering the workforce increased substantially. The new notion of childhood permitted more people to engage with baseball, football, and basketball. This increased interest in team sports continued into adulthood.
Watching and reading about sports in the newspaper or sports magazines became an acceptable way to recapture the "carefree years of their lives" (Mandelbaum 2004, 2). Mandelbaum also argues that baseball offered a renewed connection to pastoral America, creating a feeling of nostalgia for the new city dwellers and factory workers who desperately missed the pace and beauty of rural America.

Baseball coverage created the first major feedback loop between sports and media in America. Bryant and Holt claim that the importance of sport was downplayed significantly in the Puritan era, but "regular, routine reporting of sports in newspapers and specialized magazines helped shift the cultural attitude towards sports in general" (Bryant and Holt 2006, 25). They argue that in the late 1870s through the 1890s, Americans adopted a new stance on sports as important for the development of mind, body, and society. This new cultural stance on sports was shaped and fostered by increased media coverage of sports. As baseball and its media coverage became more professionalized, Americans began to consume sports media in completely different ways. Sports spectatorship became a regular and acceptable pastime for the industrial worker.

The industrial revolution created the first opportunity in America for sports production and spectatorship to be commercially successful endeavors. The growth of cities and the massive developments in individual mobility allowed for sporting events to take on new significance (Mandelbaum 2004, 3). Cities provided large numbers of sports players as well as spectators to fill newly built stadiums and watch newly formed teams. Sports fandom in the U.S. fit neatly into the predominant forms of labor and leisure. Zillmann and Paulus (1993), two psychologists who wrote on sports spectatorship, explain, "spectatorship, as a significant form of recreation, is an outgrowth of the monotony of machine-dictated labor, sports events became the weekend love affair of all those whose workday was strictly regulated by production schedules" (601). Zillmann and Paulus' article further supports the feedback between sports media consumption and societal structures. Live spectatorship in America had previously been seen as a luxury for the rich and powerful, but with the increased circulation of newspapers, and in particular sports coverage, to the middle and lower classes, sports spectatorship became accessible to an entirely new sector of the population (Bryant and Holt 2006, 21). Architecture once again emerged as an important medium. Large concrete and steel stadiums were created, replacing the more organically created playing fields of the late nineteenth century (Mandelbaum 2004, 52). We see here an important transition into the production of sport as a money-making opportunity. As I discuss in the third chapter, the introduction of investors and producers fundamentally alters sports and their media counterparts.

The available media shaped the portrayal and perception of athletics in the industrial era as well. The idea may sound a bit romantic, but Benjamin Rader (1984), a sports scholar focused on the transformation of sports media in America, labels the period of sports media prior to television as an era of heroes. Whether speaking of prize-fighters or the Mighty Casey of folklore, sports media in the industrial era painted athletes as larger-than-life characters.
Rader claims, "[t]hose standing on the assembly lines and those sitting at their desks in the bureaucracies increasingly found their greatest satisfaction in the athletic hero, who presented an image of all-conquering power" (1989, 16). To Rader, sports media before television presented the American ideal. Athletes were meritocratic role-models playing for the love of the game. Rader's analysis places the impetus on newspapers to depict dramatic stories with characters akin to David and Goliath. In addition to individual mobility, urbanization, and industrial work, Enriquez attributes the rise and legitimacy of sports journalism as the catalyst for the nationalization of sports in America (2002, 201). As all forms of communication and nationalization were transforming, sports coverage lead the charge. In the early twentieth century, most newspapers had dedicated sports writers on staff. These sports writers became famous through their innovative and entrancing writing. Writers like W. 0. McGeehan, who worked for many San Francisco papers, described athletes as sorrowful sages and their contests as the clashing of titans on a battlefield (Nyhistory.org 2015). In this period however, it is difficult to judge the difference between journalism and public relations (Bryant and Holt 2006, 30). In fact, the issue of PR penetrating journalism in the late nineteenth to early twentieth century is explicitly laid out in Michael Schudson's (1981) chapter, "Stories and Information: Two Journalisms in the 1890s". At the turn of the century, there existed a dichotomy between news as entertainment and news as E-Sports Broadcasting 29 information. As papers around the country struggled to define themselves, sports media also went through a defining period. Legitimate sports writing became known for its higher literary quality, but read more like advertisements with its exaggerated, often hyperbolic, language. Public relations soon became as much a part of sports journalism as describing the events themselves. Team owners understood the media's role in keeping attendance at sporting events up and began catering to sports journalists for coverage (Enriquez 2002, 206). The team owners expected sports journalists to act as publicists for their events. The gambit paid off as sports writing filled more and more of the daily papers and attendance at live events continued to rise. The sports writers added significance to the experience of watching a sporting event. Between the shifts in the American middle class, leisure activities, and the flowery language of sports journalism, watching a sporting event began to take on the significance of watching history unfold. We will see these same issues appear again in e-sports coverage as journalism becomes a legitimizing force within the e-sports landscape, torn between deep analysis and hyped-up depictions for the sake of generating publicity. Liveness continued to assert its role in sports media as new technologies emerged. The telegraph especially placed the impetus on news sources to provide timely information. In a fascinating illustration of the desire for timely sports news, the ChicagoTribuneran the following note on March 17, 1897, the day of the legendary boxing match between Jim Corbett and Rob Fitzsimmons: "The Tribune will display bulletins today on the prize fight. It has secured a telegraph wire to the ring in Carson City and a competent man will describe the progress of the fight, blow by blow, until the test is decided. 
The bulletins will be posted thirty seconds after they are written in the far Western city" (Bryant and Holt 2006, 29). This fixation on live updates for sporting events across the nation is another example of how sports media has shaped the media landscape of America. Information began traveling faster than ever via wireless transmissions, but it was actually a yacht race which saw one of the very first implementations of wireless for live information transmission. Sporting events saw some of the earliest uses of the telegraph for news reporting as well (Mott 1950, 597). As the telegraph allowed for a sense of liveness even for remote events, it paved the way for the most significant development in sports media prior to television: radio.

A Fixation on Liveness: Radio and Sports Consumption

Radio delivered on the push towards liveness established by the telegraph. The first broadcast of a Major League Baseball game occurred within a year of the commercial release of radio (Enriquez 2002, 206). Rader remarks, "Now the fan did not have to await his morning newspaper; he instantly shared the drama transpiring on the playing field" (Rader 1984, 23). For the first time, sports were perceived as home entertainment. Broadcasters as well as businesses capitalized on the shift. Sports coverage was integral to the rise in popularity of radio in the interwar period. In Rader's words, "In the pre-television era, the heroes of sports assisted the public in coping with a rapidly changing society. The sports world made it possible for Americans to continue to believe in the traditional gospel of success: that hard work, frugality, and loyalty paid dividends; that the individual was potent and could play a large role in shaping his own destiny" (1984, 15). By Rader's account, sports programming on radio delivered a much-needed revitalization of American ideals through the transient industrial period and the Great Depression.

The rise of radio coincided with the golden age of baseball, but there was an awkward transitional phase into the new medium while newspapers and radio both tried to define their new boundaries. While consumers clearly desired liveness, initial radio broadcasts felt flat and emotionless (Bryant and Holt 2006, 27). Some of the greatest blow-by-blow sports writers were terrible at delivering a compelling radio broadcast. Sports writers were extremely adept at creating dramas through print, but they failed to capture audiences in the early days of radio. Oddly enough, their sports knowledge undermined their sports coverage in the new medium. Instead, a new role emerged: the sportscaster. In the era of radio, the performance of live sports broadcasts came with significant stakes. Adept sportscasters were cherished more for their voices than their sports knowledge. Delivering play-by-play depictions of sporting events takes little technical knowledge; instead, the entertainment comes from the delivery. Mandelbaum writes of early radio sportscasters, "the broadcasters were akin to poets and troubadours who preserved and handed down the great tales of their cultures by committing them to memory and reciting them publicly" (2004, 80).
Delivery was actually so important that sometimes sportscasters such as Graham McNamee, known especially for his baseball broadcasts, were not even present at the event but were instead handed written play-by-play depictions of the game so that they could add their own dramatic and authorial tone to the live event (Mandelbaum 2004).

Another issue during the emergence of radio was redefining the role of newspaper sports coverage. Radio could deliver the liveness desired by sports fans and was incredibly well suited for play-by-play commentary. Newspapers had traditionally covered the blow-by-blow report of an event, capturing the drama through flowery language and hyperbole. With radio, the sportscaster captured the audience's attention through the same means, bringing in even more emotion as his voice rose and fell with the action of the contest (Enriquez 2002, 202). Sports writers instead decided to focus on an area that radio broadcasters could not: strategy. Early sportscasters had to focus so much on the delivery of the action that they could not elaborate on the reasons behind certain maneuvers. Sports writers took advantage of this deficiency and began writing articles which focused on everything around the action. From in-depth analysis of strategy to the creation of larger-than-life athlete personalities, newspaper coverage of sports in the era of radio completely changed to remain relevant.

Sports magazines also had to find a new space to occupy during radio's reign. Completely unable to keep up with the live coverage by radio and the strategic coverage of America's favorite sport, baseball, sports magazines instead began to focus on niche sports such as yacht racing. The other innovation of sports magazines in the early 1930s was their addition of full-page color photographs of athletes, something that neither radio nor newspapers could offer (Enriquez 2002, 202). They remained an important sports medium but had been supplanted by both radio and newspapers. Baseball's hold on the American public was so strong that the niche sports, which were typically covered in sports magazines, hardly seemed relevant. Football in particular rarely saw coverage anywhere other than sports magazines (Bryant and Holt 2006, 32). Football had traditionally been seen as a college sport reserved for the wealthy, but with an increasing number of college graduates in the U.S. and the rise of a new medium, its niche status was about to change (Oriard 2014, vii).

The Televisual Transformation of Sport

Television's debut in the sports world was a colossal failure. Reaching only a few hundred people, the first American televisual sports broadcast was a Columbia-Princeton baseball game on May 17, 1939. Just a few years after the commercial release of the television in the U.S., RCA's first foray into televised sport flopped. The New York Times' Orrin E. Dunlap Jr. recounted on the following Sunday, "The televiewer lacks freedom; seeing baseball on television is too confining, for the novelty would not hold up for more than an hour if it were not for the commentator" (Rader 1984, 17). He goes on to say, "To see the fresh green of the field as The Mighty Casey advances to the bat, and the dust fly as he defiantly digs in, is a thrill to the eye that cannot be electrified and flashed through space on a May day, no matter how clear the air." Bryant, Holt, Enriquez, and Rader attribute the failure of early televisual sports to several factors.
First, television camera technology was rudimentary and receivers were even worse (Bryant and Holt 2006, 31; Rader 1984, 18). Viewers could hardly see the players, much less follow the ball or action on the field. Second, television was not a commercial success upon its release. Sets were expensive and did not offer nearly enough programming to warrant their price: an issue that created a sort of negative loop, as the television industry needed more viewers to warrant more content yet could not supply enough content to attract more viewers. The third factor, described by Enriquez, is the failure of broadcasters to adapt to the new medium. Sportscasters could not actually see the video feed and cast the game as if they were still on radio, recounting every single action that occurred on the field regardless of what was on viewers' screens at home. Inexperienced camera operators had difficulty following the action and the image rarely matched what the sportscaster was describing. Radio sportscasters also had difficulty transitioning into the new visual medium because they could no longer provide the same level of drama through exaggeration and hyperbole. Where short infield ground balls could previously be described as laser-fast bullets, the viewers at home now saw that the play was just another ordinary event. Situated somewhere between the experience of watching the game live at a stadium and the sound of radio, televisual sport had a difficult time defining itself in the late 1930s and early 1940s. According to Rader, televisual sport experimentation stopped completely during the Second World War (1984, 23).

With the well-established roles of radio, newspapers, and sports magazines, the revival of televisual sport seemed to be impossible. The utter failure of televised sports from the late 1930s into the Second World War left televisual sport in a difficult position. Sports radio's popularity was at an all-time high in the 1940s. Baseball had captured the hearts and minds of the American people, and famous radio broadcasters such as Bill Stern and Jack Armstrong kept them listening with bated breath (Rader 1984, 30-31). Baseball, and live sports spectatorship more generally, however, could not keep the nation content for too long. In what has been dubbed the Sports Slump of the 1950s by Rader and others (Bryant and Holt 2006; McChesney 1989), spectatorship had finally started to dwindle. Television sets were making their way into homes in record numbers after World War II. In the post-World War II era, pastimes shifted from inner-city, public forms of recreation to private, home-centered forms of recreation. Sports revenue was down and change was in the air. People could watch baseball on their television sets at home, but not many people wanted to. As shown by the earlier quote from The New York Times, television had difficulty containing the magic that baseball once held.

Football, however, was poised to rise with the new medium. It had been long overlooked, but football was incredibly well suited for television broadcasts. The large, visually distinct ball and typically slow-moving action provided an acceptable subject for contemporary television camera technology (Grano 2014, 13). College football had seen a bit of success in newspapers, but professional football had a negative reputation as a "perversion of the college game played for alma mater rather than a lousy paycheck" (Oriard 2014, vii).
Radio broadcasts of football had never reached the same level of success as baseball. Professional football seemed to be a sport without a suitable medium. As sports media scholar Michael Oriard explains, "[o]nly television could give the professional game a national audience, and Pete Rozelle's defining act as the commissioner who ushered in the modern NFL was to market the league through a single television contract, rather than leaving clubs to work out their own deals" (2014, vii). This deal with the broadcasting giant NBC led to the NFL's great breakout story and what would soon become the model for televised sports (Rader 1984, 85). With NBC still losing money on a dwindling sports fanbase, the network was ready to pull the plug on its deal with the budding NFL until the 1958 championship match between the Baltimore Colts and the New York Giants (Grano 2014, 13). This match, still hailed as the 'Greatest Game Ever Played', would become the longstanding origin story of televised football. The game went into a second overtime, pushing the broadcast into prime time on the East Coast, a slot in which NBC never dared to place professional football. As millions of Americans tuned in for their regularly scheduled programming, they instead found John Unitas and his Baltimore Colts scoring the game-winning touchdown after a long, hard-fought battle. Oriard, Rader, Grano, Oates, and Furness all trace the NFL's commercial success to this one defining moment.

As compelling as origin stories often are, the truth is that many other factors led to the success of football in the new mass medium. New technologies such as video tape were integral to the rise of football in America. Hitchcock argues that instant replay in particular helped with the rebranding of professional football: "The use of video-tape gave the game of football a whole new image... The instant replay changed football from brutal, quick collisions into graceful leaps, tumbles and falls. It gave football an aura of art in movement. It made football attractive to entirely new segments of the audience" (1989, 2). Where football players had once been seen as lethargic brutes, instant replay allowed broadcasters to slow down images, dissect plays, and highlight the athleticism of players (Rader 1984, 83-84). Sports, with football leading the charge, were once again on the cutting edge of media adoption. According to Dylan Mulvin, the first documented use of instant replay for review and training purposes was in 1957 during a game between the Los Angeles Rams and the San Francisco 49ers (2014, 49). By 1964, instant replay was a standard broadcasting technique across all sports. The NFL's willingness to adapt to the new medium set it apart from other sports at the time.

In addition to these technological and legal advances, Bryant and Holt as well as McChesney argue that one particularly innovative producer reinvented sports broadcasting for television: Roone Arledge. With ABC's full support, Arledge established television broadcasting conventions still present today. After the 1958 championship game between the Colts and the Giants, ABC was scrambling to catch up to NBC's success in televised sports broadcasting. As Enriquez describes, "Television broadcasting affected different sports in different ways. It devastated boxing, had mixed effects on baseball, and proved a boon to college and professional football" (2002, 202).
As NBC began to ride the wave created by the NFL, ABC looked to get in on the action. Arledge was given free rein to perform a complete overhaul of ABC Sports. Bryant and Holt argue that the single most important innovation Arledge brought was the notion that a televisual broadcast should be presented "from the perspective of what the typical fan would see if he or she attended the game live" (Bryant and Holt 2006, 33). Arledge (2003) believed that the broadcast should capture the essence of attending a game, not just the play on the field, but the roar of the crowd, the cheerleaders, the marching bands, and the coaches on the sidelines. As Enriquez describes, "under Arledge, television assumed every role previously played by print media; it served as the primary medium for experiencing events, it provided detailed analysis, and it gave human faces to the participants" (2002, 205). Through football, televised sports were able to set conventions which separated them from earlier forms of media. This transition lives on in live-streaming today, as we will see later with live-streaming's adaptation rather than transformation of televised sport.

The arrival of television meant that sports radio and print media had to redefine their roles in sports coverage. Television could deliver the liveness of radio and, with the help of commentators and technology like instant replay, the drama and dissection of strategy found in print media. Newspaper coverage of sports was now relegated to simple recaps. Sports magazines, on the other hand, rode the success of television. As Bryant and Holt assert, "Sports Illustrated offers a classic example of an old medium responding to a new one" (2006, 36). Rather than seeking out an area left uncovered by television, Sports Illustrated supported televised sports by providing innovative action photography and updates on the most popular athletes and teams at the time.

Sports broadcasts of the 1960s were infused with the hopes and fears of the Cold War era. R. Powers, a television sports scholar, suggests that sports filled a void in the American public, "shrugging off the darker morbidities of the Cold War and McCarthyism" (1984, 118). This re-found focus on sports as spectacle fit the politics of the moment: "the youthful theme of ABC echoed the Kennedy idealism of the new frontier, the sporting emphasis echoed Kennedy's image of muscular athleticism..." (Whannel 2002, 34). Entertainment sports media, with its art-in-motion presentation, delivered a message of newness and regeneration to America. Through broadcasting and advertising deals, sports helped build and perpetuate the growing conspicuous consumption movement and the capitalist ideals of post-war America. Athletes resumed their star status. Sports stars began appearing in advertising everywhere. Merchandising became a key part of sports promotion. Anything from replica jerseys of sports stars to blankets and flags with team branding can be found almost anywhere in the U.S.

Contemporary sports fandom has come to mean much more than simply following a team. It means buying a team's products, playing sports video games, joining fantasy leagues, and watching sports entertainment television. Oates, a sports media scholar focused on the NFL, writes that fandom has been transformed by the presentation of athletes as commodities to be consumed selectively and self-consciously by sports fans (2014, 80).
Previously subcultural hyper-fandom activities such as fantasy football and sports video games, Oates argues, have moved into mainstream prominence and profitability. Fans are invited to interact with athletes as vicarious managers in fantasy sports, offering a completely new, personally tailored form of interaction with sports organizations. This new drive for constant connection and feedback within the sports industry culminates with live-streaming.

Live-Streaming: Constant Connection

As Oates suggests, sports fandom has fundamentally changed to reflect an increased involvement on the part of the spectator. Athletes and personalities have become commodities for fans to interact with. Social media, fantasy sports, and video games have created a connection to sports stars that was never before available in other media. At any moment, a spectator can catch highlights on ESPN, head over to forums to discuss major sporting events, or load a stream of a match on their phone, all while tweeting at their favorite athletes with the expectation that their words will be received on the other end.

Recent trends show a change in the sports media landscape as new platforms begin to vie for control over sports broadcasting in the US. The NFL has recently signed a deal with Google allowing for the streaming of games over the internet after their current contract with DirecTV ends in 2015. This deal reflects the changing media landscape in the internet era. The rise of new streaming platforms poses an interesting dilemma for the current media titans and creates opportunities for new forms of media sports. Thus far, using the tradition established by McChesney, Bryant, Holt, and Rader among others, I have used sports media as a lens through which to view particular socio-cultural moments in America. I now turn that lens towards the contemporary sports media landscape. What can we learn about our own social moment by looking at the use of streaming platforms for traditional sports or the arrival of e-sports as an entirely new form of professional competition that makes use of older forms of media, but thrives in live-streams and video on demand?

The MLB offers an early case study in the use of live-streaming for major league sports broadcasting. The regular season in the MLB consists of 2,430 games, a staggering number compared to the NFL's 256. The sheer number of regular season games held each year causes a problem with over-saturation. This inundation of content lowers the value of each individual game in the eyes of the major networks (Mondelo 2006, 283). Games that the networks chose not to air due to scheduling conflicts previously went unseen by fans outside of the local media markets of the two competing teams. To remedy the situation, the MLB streamed over 1,000 regular season games online starting in 2003. The launch of MLB.tv in 2002 allowed engaged MLB fans to continue watching content even when they did not have access to the games through the major networks. While not initially a huge commercial success, MLB.tv still runs today, over a decade later, at a monthly subscription of $19.99, and as of 2014 incorporated both post-season games and the World Series as part of the package (MLB.tv 2015). While the MLB has not released official revenue totals for its live-streaming service, with 3.7 million subscribers the platform generates well over $400 million per year (MLB.tv 2013).
This little-known use of live-streaming shows a hunger for immediate interaction with sports media regardless of the available medium. Early live-streaming fundamentally looks and feels like television, but it filled a role which network television could not: all-access and constant connection to media. It took form on a new platform, but did not truly differ from television. Early live-streaming is more like an adaptation of television than a new medium. Rather than creating something new, the early foray into live-streaming by the MLB simply adapted the already present broadcasting infrastructure and applied it through a different avenue. Television is often invoked in live-streaming. If we look at MLB.tv, the .tv signifies its connection to television, but that domain is actually the official domain for the country of Tuvalu. Other streaming platforms like ustream.tv, twitch.tv, and MLG.tv, all based outside of Tuvalu, use the same domain to signal their televisual connection.

Live-streaming emerged at a very particular moment in the evolution of sports media. With air-time limited on the major networks, the internet allows a near-infinite amount of content to reach sports fans. As Oates would argue, from fantasy sports, to blogs, to live-streaming, the internet is, for many, the new space of the sports fan. Live-streaming goes beyond the ability of other media to reach viewers wherever and whenever, whether from a home computer or a mobile device. Live-streaming delivers on the constant connectedness expected by consumers today. At its roots, live-streaming is a televisual medium. So what separates it from television? Live-streaming today has created its own niche by blending other forms of media. Most live-streams host an Internet Relay Chat (IRC) in addition to the audiovisual component of the broadcast. This IRC allows viewers to chat with other audience members and often the broadcaster, a functionality not currently available in television. This live audience connection in live-streaming is unparalleled in television. Hamilton et al., in their investigation of the significance of live-streaming for community creation, situate Twitch streams as an important 'third place' for community. Building on the work of both Oldenburg and McLuhan, Hamilton et al. (2014) suggest that "By combining hot and cool media, streams enable the sharing of rich ephemeral experiences in tandem with open participation through informal social interaction, the ingredients for a third place." The third place that the authors point to creates a rich connection akin to interpersonal interaction. The ephemeral nature of these interactions creates a deep sense of community even in streams with hundreds of thousands of viewers. Live-streaming, and in turn the IRC associated with streams, creates a shared experience tantamount to the "roar of a stadium" (Hamilton et al. 2014). These streams also pull in a global audience, connecting isolated audiences into one hyper-connected community. Live-streaming draws on television for its look and feel, but delivers not only on the desire for liveness perpetuated in sports media but also the hyper-connectivity present in today's globalized world.

E-sports, Live-streaming, and Sports Media

Many factors contributed to the success of live-streaming for e-sports. It arrived at a moment when television seemed closed to e-sports, it was much less expensive to produce, and it was much easier to cultivate.
Television broadcasts are prohibitively expensive to produce. Early attempts at airing e-sports on television have typically flopped, rarely surviving past a second season. E-sports are difficult to film when compared to traditional sports, and conventions had not yet been set for the televisual presentation of e-sports (Taylor 2012). The action in traditional sports can typically be captured by one shot. E-sports broadcasts, in contrast, must synthesize one cohesive narrative out of many different player viewpoints with varying levels of information. In a game like Counter-Strike, broadcasters must wrangle with a large map with ten players in first-person perspective. The resulting audiovisual feed is a frantic attempt to capture the most relevant information from the players, with an outside 'observer' controlling another viewpoint removed from the players' point of view. The observer functionality in the early days of e-sports broadcasting created a difficult barrier to overcome for commercial success on television. Observer functionality had not yet become a focus for game developers, and commentary had not reached the level of competency it has in more contemporary broadcasts. Instead of finding success on television, e-sports pulls in millions of concurrent viewers on live-streaming sites such as Twitch.tv.

With television seemingly out of reach and streaming requiring significant investment per event in the early 2000s, e-sports broadcasting remained relatively stagnant until the arrival of a reliable, and cheap, live-streaming platform. Justin.tv (and other similar sites like UStream and Stickam), which launched in 2007, delivered exactly what e-sports broadcasters needed to grow. The site allowed users to quickly and easily stream content online with the use of some relatively simple software. Both broadband internet reach and streaming technology had developed to a point that lowered the barrier of entry for broadcasters. Players from around the world streamed games from their bedrooms. E-sports broadcasters reached new, massive audiences. The success of gaming content on Justin.tv spurred a new streaming site dedicated solely to gaming. The games-centered streaming site, Twitch.tv, launched in 2011. Twitch.tv revolutionized the e-sports industry. Each of the casters I interviewed spent time detailing the importance of Twitch.tv without being prompted. As one explained, Twitch.tv is "the clearest driving factor that's grown e-sports over the past 2-3 years." As mentioned in the introduction, e-sports audiences have reached previously unheard-of levels. Large-scale e-sports events regularly see concurrent viewer numbers in the hundreds of thousands. These broadcasts still largely resemble televised sports, however, rarely, if ever, making use of the IRC.

Live-streaming is just one of the forms of media the e-sports industry makes use of. In fact, e-sports interacts with most media in the same ways that traditional sports have. The e-sports industry pushes back into almost all of the earlier forms of media discussed in this chapter. Print and radio typically fill a PR role in e-sports coverage. Large events or developments often make their way into publications like The New York Times. Local radio segments will occasionally feature summaries of e-sports events occurring nearby. Internet versions of both print and radio sports coverage are fundamental segments of the e-sports media ecosystem.
Podcasts (digital audio files available on the internet through downloads or streaming), vlogs, and video diaries fill essentially the same role for e-sports that radio currently plays for traditional sports. Experts weigh in on recent developments and players break down certain aspects of a game. E-sports journalism has also emerged as a legitimizing force within the industry. Sites like ongamers.com and esportsheaven.com keep fans abreast of any new developments in the professional scene for all of the major e-sports titles. Journalists like Richard Lewis add legitimacy to e-sports through their coverage of current events. Their recaps of developments as well as summaries of various tournaments and leagues closely resemble their print counterparts in sports coverage. It is clear that the e-sports industry is in conversation with many forms of media. Many of the forms and techniques are borrowed directly from sports coverage. These forms of media did not appear instantly, however; they are the result of years of push and pull with the larger sports media landscape. Nowhere is this more apparent than in the commentating of e-sports live-streams.

Chapter 2

Shoutcasters

Collecting Conventions

E-sportscasters, often referred to as shoutcasters, both look and sound like professional sportscasters. Their attire and cadence both create an instant connection to televisual sports. Having never seen a game of Starcraft 2 before, you may watch the flashing lights and explosions with a perplexed look on your face. As you continue to watch, you hear two commentators provide a narrative, stats fly across the screen, and you start to piece together the game in front of you. After a few minutes, you know the two players who are facing off against one another, you feel the excitement as they engage each other's armies, and a slight sting as the player you were rooting for concedes the match with a polite "GG." The whole presentation feels like a variant of Monday Night Football with virtual armies instead of football teams. From the stat-tickers to the sound of the commentator's voice, you can almost imagine the ESPN or CBS logo gracing the bottom corner of the screen.

Shoutcasters have become a staple in e-sports. One of the main signifiers of the 'sports' moniker professional gaming has taken on, shoutcasters lend an air of professionalism to a scene which often struggles to define itself. By adopting the 'sport' title, e-sports has set a precedent for its broadcasters which informs their style and conventions. Shoutcasters are important to investigate because they form a fundamental grounding for e-sports which helps it to create its identity in the face of blistering turnover rates and constant field shifts. E-sports stand in a unique position compared to traditional sports. Where players and coaches in traditional sports often have careers that last for several years, e-sports personalities suffer from intense turnover rates where professional careers can end within a year. E-sports players burn out quickly and coaches rarely make a lasting name in the industry. The recognizable personalities in e-sports are the few innovators and commentators who turned their passion into a career. In this chapter, I analyze the role of shoutcasters within the larger framework of the e-sports industry. I build much of this analysis on the foundation that Taylor (2012) established in her investigation of the rise of e-sports.
Much of Taylor's analysis still holds true today, but other developments in the field have since created new dynamics within shoutcasting that were not present during her initial encounters with shoutcasters. Understanding how shoutcasters borrow from earlier forms of media, the issues they perceive within the industry, and how they cultivate their own identity while grappling with the hyper-connection found in live-streaming as a medium allows us to grasp the relationship e-sports broadcasting has with earlier forms of media while still creating its own identity. I begin with a very brief look at the history of shoutcasting.

Shoutcasting History

One can see that even early attempts at broadcasting competitive gaming borrowed heavily from their media contemporaries. Starcade, a 1982 show that ran for two years, marks one of the first forays into e-sports broadcasting. Though the term e-sports had not yet emerged, the show featured two opponents attempting to outscore each other on various arcade machines. If we look to Starcade as an early example of e-sports, then the origins of e-sports commentating resemble the game show commentary found in Jeopardy! or The Price is Right. Watching Starcade for the hosting alone reveals many similarities to other game shows: the host wears typical game-show host garb, pleasantly explains every aspect of the competition, and speaks with the broadcast voice we all recognize. Starcade also shows the constant evolution of competitive gaming coverage as it continued to refine its camera angles, presentation, and format over its two-year run.

The model which more closely resembles our modern vision of shoutcasting gained momentum at the turn of the twenty-first century. The title shoutcaster comes from the early streaming software used for e-sports broadcasting, SHOUTcast. While many people familiar with e-sports may have no idea where the term comes from, a prominent shoutcaster, djWHEAT (2012), claims that the title remains due to its signaling of the history of e-sports. SHOUTcast, a media streaming program, arrived in 1998, allowing interested parties to broadcast audio recordings to various 'radio' channels for free. SHOUTcast allowed for video streaming, but as one early shoutcaster I interviewed lamented, the bandwidth and equipment required for video streaming were prohibitively expensive. Instead of the audiovisual broadcast we regularly associate with e-sports live-streams today, early shoutcasters relied on audio recordings akin to early radio coverage of traditional sports. These early broadcasts only streamed audio to a few hundred dedicated fans on internet radio. Early shoutcasts followed the form of traditional play-by-play radio broadcasts, focused primarily on presenting every development in the game. In interviews, veteran shoutcasters were not shy about admitting the influence radio sportscasters had on their own style. One mentioned that he spent hours listening to live sports radio to hone his own skills. Early shoutcasters also performed many aspects of the production that they are no longer required to perform in the more mature e-sports industry. They would attend events and set up their own station, typically with their own laptop and microphone. It was a very grassroots affair. With little experience in the technical aspects of broadcasting, the productions emulated as much as they could from sports broadcasting to lend an air of professionalism.
With the arrival of Twitch.tv, and other reliable streaming platforms, much of the onus of production was taken off of shoutcasters. Instead of acting as producers, directors, editors, and on-air talent all at once as they had in the early audio-only streams, shoutcasters are now more able to focus on the portion of their work from which they get their name. Shoutcasting after the early days of internet radio has come to not only sound like traditional sportscasting, but also look like traditional sportscasting.

Something Borrowed: Influences from Sportscasting

Wardrobe

Many of the shoutcasters I interviewed talked about wardrobe as a huge change within shoutcasting, one that was spurred entirely by looking at traditional sportscasting. Most shoutcasters got their start wearing t-shirts and jeans at various e-sports events. Today, you will rarely find a shoutcaster not wearing a shirt with a blazer. Looking at the images below shows the incredible shift in shoutcasting just within the last six years. Both images feature the same shoutcaster: Joe Miller.

Figure 2 - Left: Joe Miller at the 2009 Intel Friday Game London; Right: Joe Miller at the 2015 Intel Extreme Masters World Championship in Katowice, Poland. Image credit: ESL, Philip Soedler and Helena Kristiansson. Flickr.com/eslphotos

The left-hand image comes from the 2009 Intel Friday Game London while the right-hand image comes from the 2015 Intel Extreme Masters World Championship. While the images are quite similar, the professionalism apparent in the right-hand image resembles that of a professional sportscaster. The gamer/geek vibe found in the left-hand image has been removed from the shoutcaster's image. As a few of the shoutcasters I spoke with admitted, the drive to rework the shoutcaster wardrobe came purely from traditional sports. On top of that, they pointed to a desire to shed the gamer/geek stereotypes that e-sports had come to inhabit. By adopting professional attire, they felt that they could get rid of the old image and emulate the professionalism of a sports broadcast. Wardrobe is not the only aspect of traditional sportscasting that has made its way into shoutcasting.

Style

One of the more elusive aspects borrowed from traditional sports is the actual commentary style. I use the term elusive here to signal the difficulty in pinning down exactly why shoutcasters remind us so vividly of traditional sportscasters. Early shoutcasters had no models outside of traditional sportscasting so they took as much as they could: "So as a broadcaster we look at traditional sportscasting. We pull from that and then make sure it fits in game casting." As it turns out, many sports commentary conventions translate well into game casting. As such, the first generation of casters shares many similarities with television sportscasters. Most of these early shoutcasters admit to being influenced almost entirely by traditional sportscasters. One caster explains, "Television is where we grew up, it's what we watched. So clearly that's where we're going to pull from."

Shoutcasters typically have no media training, instead relying on mimicry of earlier conventions to get by. As with most positions in e-sports, and similar to early sports writers and radio casters, shoutcasters are just passionate fans turned professional. In conversations, they each revealed a bit of their own personal history that pushed them towards broadcasting, but only one ever mentioned having received any sort of formal training.
Years into his shoutcasting career, he "went back and did a journalism and broadcasting course for 6-9 months." Of particular note, he mentions, "they did one really good project which was 'how to be a news presenter'. They taught me the basics of that." The rest, he says, he learned on-air through experience. The other shoutcasters I interviewed echoed this story. Most of the shoutcasters I interviewed fell into shoutcasting through happenstance and had to learn their craft on-air. Shoutcasters are akin to the very early television sportscasters who had to reinvent their style during broadcasts, like Bob Stanton, a radio sportscaster turned television sportscaster who would send his friends to sports bars to gather feedback and suggestions from audience members (Rader 1984). Echoing this inexperience and improvisation, one shoutcaster I interviewed confided, "the first time I had ever been on camera, I sat down and I was like, 'I have no idea how to do this.' I had done two and a half years of audio casting, but I had never done video." Another caster recalls of his first show, "All I knew going into my first broadcast was that I know this game. I know how it works, I know these players, and I play against these kinds of players. I don't know how commentary works, but I can do this." After these first trial broadcasts, both of the above-mentioned shoutcasters admitted to going back and watching traditional sportscasters to learn more about their craft.

Other broadcasting style conventions such as how to handle dead-air, how to end a segment, or how to transition into gameplay were lifted directly from sportscasting. Paul "ReDeYe" Chaloner, a prominent personality within the e-sports industry, addresses each of these techniques in his primer on becoming a professional shoutcaster, constantly pointing to various examples from traditional sports broadcasting to illustrate his points. In his section on dead-air, Chaloner writes, "[o]ne of the best pieces of advice I had for TV was from legendary sports producer Mike Burks (11 time Emmy award winner for sports production) who told me 'A great commentator knows when to shut up and say nothing'" (2009, 9). Chaloner uses traditional sports broadcasting as a way to explain shoutcasting, a clear indication of its influence on e-sports broadcasting.

Content Analysis: Play-by-play and Color Commentary in the NFL and LCS

Another convention lifted directly from traditional sports broadcasts is the arrangement of the casting team. Traditional television sportscasters fall into one of two roles: play-by-play or color commentary. Shoutcasters use these same two roles. Both sports broadcasts and e-sports broadcasts feature one of each type. The play-by-play commentator narrates the action, putting together the complicated and unconnected segments of the game into a cohesive narrative. The color commentator provides their in-depth analysis of the game, typically from the stance of a professional player. Shoutcasters have adopted the two-person team directly from traditional sports broadcasts. The path to each role follows the same pattern as well. An ex-professional player almost always fills the role of color commentary in both traditional sports and e-sports. Their insight is unparalleled. Color commentators attempt to break down complex series of events or highly technical maneuvers as if they were still a professional player. In the words of one e-sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best
In the words of one e- sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best E-Sports Broadcasting 51 to emulate them." He goes on to say, "You can read up on it and study it as much as you like, but unless you've lived it, you can't really comment on it." In comparison, a play-by-play commentator does not need to have the technical depth, but relies more on presentation. Even though a play-by-play commentator has most likely played hundreds of hours of whichever game they cast, they cannot fill the role of the color commentator. This dynamic allows for play-by- play commentators to switch games with relative ease whereas color commentators, both in traditional sports and e-sports, are locked into one game. To illustrate the emulation of sports broadcasting found in e-sports, I now turn to a brief content analysis of the commentary found in a regular season NFL game and a regular season League of Legends Championship Series game. I start with the commentary from one play in an NFL game. After presenting the traditional model, I move to the commentary from one team fight in League of Legends to demonstrate how the convention has been adapted for e-sports commentary. In both cases, I have removed the names of players, commentators, and teams to cut down on jargon and clutter. Each case exhibits the dynamic present in the two man commentary team. NFL With both teams lined up, the play begins and the play-by-play commentator comes in immediately. Play-by-play: Here's [player 1] out to midfield, a yard shy of a first down. [player 2] on the tackle. After the play has ended, the color commentator takes over. Color: It's been [team 1] on both sides of the ball. Whether it be defense and the way that they dominated this ball game and then offensively, the early going had the interception, didn't get much going over the next couple of possessions offensively but since that time, [player 3] has been very precise in how he has thrown the football and they just attacked this defense every which way. E-Sports Broadcasting 52 LCS Three members ofthe Red Team engage Blue Team atRed Team's turret Play-by-play: This is going to be dangerous. Doing what he can to hold out. They're going to grab the turret, the fight will continue after the shield onto [player 1] is already broken. He gets hit, the ignite is completely killing the ultimate! He gets hit by [player 2] who turns around again and heads back to [player 3]. With the action overfor the moment, the colorcommentatorbegins to speak Color: I thought he finished a camp here too... The color commentatoris cut off as two more members ofBlue Team attempt to attack. Play-by-Play Heyo, as the top side comes in here too. [player 1], will he hit a good ultimate!? Oh! They were staring right at him but now he's just left to get shredded apart here. They couldn't have thought that this was going to go well for them. With thefightconcluded, thecolorcommentatorcontinuesagain. Color: Is this just the week of chaos? Because that was a really really uncharacteristic lapse in judgement from [Blue Team]: Not calling everybody into position at the right time, and [Red Team] with the advantage make them pay for it. They didn't expect the ignite from Nautilus. I think they expected Nautilus to have exhaust instead, but [player 1] pops the ignite, and as we said there is no armor so [player 2] just... and it continues! The color commentator is cut off once again as the two teams engage one another for a third time. 
If we look at these examples for their content rather than the specific moment in the game, we can catch a full illustration of the two-caster dynamic. As we can see in the NFL example, the play-by-play commentator provides a running narration of the action in the game. When the action ends, the color commentator provides the meta-level analysis of the unfolding events. In the LCS example, we see that the same dynamic is present; however, due to the continuous action in the game, the transition into color commentary becomes difficult. In the first lull, the LCS color commentator tries to insert his analysis, but he is cut off by a second engagement. The color commentator stops talking immediately and allows the play-by-play commentator to continue describing the action. After the engagement ends, we hear the color commentator pick up again, explaining why the fight developed the way it did as well as his insight into why the teams played the way they did.

Entertainment and Narrative

Entertainment value was a repeated concept in my interviews with shoutcasters. Some went so far as to claim that their role was only to entertain. One stated, "I want to get you excited. I want to get you to watch the game as if it was a show on television." Many would point to good sportscasters as an example to follow. If we recall the example of the early days of radio sportscasting, casters had a difficult time making the transition to the new medium. Their broadcasts felt flat when compared with their print counterparts (Bryant and Holt 2006, 27). Early sportscasters got locked into the idea that their responsibility was to provide the basic play-by-play depiction of a match. The golden age of sports radio was ushered in by sportscasters, such as Graham McNamee, who were so popular that they'd be asked to cast games remotely. McNamee, like a live version of his print counterparts, was famous for creating florid depictions of the game; athletes became heroes and their play became combat as told by McNamee. While the presentation of live and accurate information was still essential, popular radio sportscasters shifted sports media from news reports to entertainment. Sportscasters are responsible for this shift. Without their expert embellishment, play-by-play depictions lack entertainment value.

Even non-sports fans can feel the excitement from a particularly good sportscaster. The game they portray is far more intriguing than any actual events happening on the field (Bryant, Brown, Comisky, and Zillmann 1982). This disconnect forms one of the primary reasons that the transition to casting televised sport was so difficult. The small liberties that sportscasters took were no longer acceptable in the visual medium. Once the home viewer could see the game, commentary had to shift to accommodate more scrutiny. Radio sportscasters were notorious for their embellishment. As Bryant, Comisky, and Zillmann note from one of their several investigations of sportscasting, roughly forty percent of commentary is dramatic embellishment (1977). In 1977, the authors tracked the amount of hyperbole and exaggeration in sports broadcasting and found that over half of the speech was dedicated to drama. E-sports shoutcasters, by comparison, rarely use dramatic embellishment of action. A few of the informants noted that they feel that embellishing actions is not possible due to their audience.
The e-sports audience, as pictured by shoutcasters, consists mostly of dedicated players. While many sports fans may play their sport casually, e-sports fans engage regularly with the games they watch. As one shoutcaster explains, "we've only ever gone out to a hardcore audience." He acknowledges that the current audience is in flux, but the primary base of e-sports fans consists of intensely dedicated viewers and players. Because of this dynamic, shoutcasters feel that embellishment of the actions on screen would be difficult to slip past a discerning eye. Their belief that dramatic embellishment isn't possible may say more about their understanding of traditional sports fans than it does about their formulation of their role as commentators. While unacknowledged in interviews, the possibility for shoutcasters to add embellishment exists. Their choice not to use embellishment speaks more to their formulation of the e-sports audience than it does to their casting quality. Instead of embellishment of action, shoutcasters rely on another convention found in traditional sportscasting: narrative.

Studies that focus on the media effects of sportscasting suggest that sportscasters fundamentally alter the audience's perception of the telecast through story-telling and narrative (Krein and Martin 2006). Sportscasters take many liberties in their descriptions of the game to add a dramatic flair. In several empirical studies, Bryant, Brown, Comisky, and Zillmann (1979) found that when sportscasters created a narrative of animosity between players, viewers felt an increased amount of tension and engagement. They conclude that the narrative scope of the sportscaster is critical in the perception of sports broadcasting. This narrative creation has bled into shoutcasting as many shoutcasters attempt to amplify the emotional content of their games by highlighting underdog stories or hyping up animosity between players. One caster I interviewed connected his work to the narrative creation in sports commentary by stating, "Emotion is one of the key words in commentary. You need to be able to connect a certain emotion to the words you're saying. You need to be able to make someone scared for their favorite player or overjoyed when they win. Create greatest enemies. You need to be able to make these feelings through what you say or how you say it. Emotion is everything." This caster goes to great lengths to dig up statistics from previous matchups to provide a narrative for the match he casts. Through this investigation, the shoutcaster is able to contextualize a match with a rich history. Perhaps two players have met three times before and each time the result has been the same. Will viewers be able to share in the momentous victory of the underdog? As part of their preparation, shoutcasters will research all of the previous meetings between two players to create a history between them, a tactic which they acknowledge has been used in traditional sports for decades.

Production

Stream production is another realm where e-sports have started to borrow heavily. While e-sports producers may have gotten a head start on streaming live events, they often rely on the expertise of television producers to put a show together. Multiple shoutcasters pointed to a steady influx of television producers making their way into e-sports: "the way we approach a production is very much like television. A lot of the production guys that are getting into it are from television."
In fact, the executive producer of the League of Legends Championship Series, an immensely popular e-sports program, is former Emmy winner Ariel Horn. Horn won his Emmy as an associate producer of the 2004 Olympics for NBC. Likewise, Mike Burks, executive producer for the Championship Gaming Series mentioned in the above quote from Paul Chaloner, had an immense amount of experience in televised sports before migrating to e-sports. These are just two of the many experienced television producers making their way into e-sports. Their style is beginning to show as e-sports events become more polished every year. If we recall the image of Prime Time League in the introduction to this thesis, we can see the influx of television conventions in e-sports from the production side. The shoutcasters benefit from the experience of working with television producers to refine their style. As the field has grown, however, we begin to see minor tweaks in style and delivery. Spending significant time with e-sports casting, in comparison with sportscasting, reveals several distinctions. Much of this difference comes with the age of the field, but just as Starcade evolved over its short lifespan, shoutcasters have found ways to make themselves unique. Their understanding of their role within the overall e-sports industry informs us of some of the key differences here. Something New: Shoutcaster Identity Shoutcasters are situated somewhere between fan and professional. As evidenced by the above investigation of how shoutcasters are informed by their traditional predecessors, the role of shoutcasters is still very much in flux. Shoutcasters are just recently creating their own identity separate from their sportscasting roots. In particular, the less experienced shoutcasters I spoke with use markedly different models to inform their own casting. The Second Generation of Professional Shoutcasters A second generation of casters is just now coming into the scene. Instead of looking to traditional sportscasters as their models, they emulate veteran shoutcasters: "my influences are the streamers that I watched. I watched everyone who casts and commentates...my commentary style comes from those guys. I don't know how much is conscious or just mimicry." This new caster has been on the scene for only a fraction of the time that the veterans have. In that time he has honed his shoutcasting skills not by finding sports commentary and seeing which aspects apply to shoutcasting, but by absorbing as much information as he could from other shoutcasters. Another fresh shoutcaster offers a fascinating disconnect from the older casters: "I definitely bounce off more e-sportscasters than sports. I just watch more e-sports than sports. Sports are so different than e-sports, there's so little that I can actually use from them." Where his predecessors admit to borrowing primarily from traditional sportscasters, this new generation has left the realm of traditional sportscasting behind. The professional casters provide material for an amateur level of shoutcasters to pull from. The shoutcasters I interviewed were all professionals who typically work on major events with massive support and budgets. With a robust network of shoutcasters to pull from, however, we may see much more support for the grassroots level of e-sports that many early fans are accustomed to. Current shoutcasters also provide a model for potential careers.
Through the hard-fought struggle of years' worth of unpaid events, the shoutcasters I spoke with have created a legitimate profession worth pursuing. Most warned me that the path is no longer as easy as they once had it. Most of them pursued shoutcasting for the love of e-sports. They had years to fumble through persona creation, broadcast techniques, and conventions. New potential shoutcasters are automatically held to a higher standard. A senior caster offered the following advice: "With how casting has changed, you need to be open to casting multiple games. You have to be willing to learn. There is a lot we can teach a caster, but you have to have some skills within you alone. You have to have some camera presence." The mention of camera presence signals a significant jump from early shoutcasting. Just a few years ago, the shoutcasters I interviewed sat down in front of a camera for the first time armed with nothing but game knowledge; camera presence was a foreign word to them. Perhaps the most significant change to casters is their overall level of experience. Some of the shoutcasters I spoke with have been broadcasting for over a decade. Time has allowed these casters to experiment and find their own style. As mentioned earlier, many of the minutiae involved in running a show take time to learn. Most casters got their start casually. They may have been passionate about e-sports and created a role for themselves within the industry. Some are former players who made the hard decision to give up on their hopes of winning big to instead cultivate a community. As new professionals, shoutcasters are just now coming together with the support of e-sports companies under legitimate full-time contracts. The professional casters I spoke with all acknowledged a significant change in their commentary since making the transition into full-time casting with other casters around for feedback and training. One explained that he had never been sure how to handle dead air: moments when both casters are silent and there is little action in the game. Through feedback sessions with other casters, he learned that there are some appropriate times to let the viewer formulate their own opinions on the match. Heeding the advice of veteran casters like Paul Chaloner, he went on to explain that one of the problems he sees in shoutcasting more generally is that shoutcasters are afraid to just be quiet during a stream. Part of the emotional build-up of a game, he explains, is letting the natural flow of a game take its course without any input from the casters. It will be fascinating to watch as these expert networks inform e-sports broadcasts across the world. One informant remarked, "Now that we're all working together, we're learning a lot off of one another, which hasn't happened in commentary before." Beyond allowing veteran shoutcasters to compare notes, the professional status of shoutcasting provides training to new shoutcasters. One veteran claimed, "All the junior people are learning so much faster than we ever did. They're taking everything we learned over 5-10 years and doing it in months." These veteran casters can now pass on their experience and their style.
Techniques like hand-offs at the end of a segment or transitions from the desk to gameplay often came up in my interviews as issues which take years to learn, but newer shoutcasters are able to pick these cues up from earlier shoutcasters instead of taking what they can from a sports show and hoping that everything translates well. Beyond the expected roles that shoutcasters fill, they also perform many secondary tasks which don't typically fall to traditional sportscasters. In the very early days of live-streaming, shoutcasters were often responsible for every aspect of the broadcast from set-up to teardown. Some shoutcasters still regularly assist on production aspects of the broadcast such as graphics packages, camera set-up, and audio checks, but others leave the production aspects of the stream to more experienced hands while focusing instead on updating websites, answering tweets, creating content, or streaming their own play sessions. No two casters seem to fill exactly the same role within the broadcast team. They do, however, share some similarities which seem to form the shoutcaster identity. Record-keepers and Community Managers All of the casters pointed to stats-tracking as part of their roles outside of their air-time responsibilities. Most of them keep highly detailed databases full of every possible stat they can get a hold of from game clients and public databases. These stats can be as simple as wins and losses from remote regions or LAN tournaments that do not post their results online. The stats can also get as minute as the number of units a particular Starcraft 2 player built in one particular match. When the data isn't readily available, shoutcasters go out of their way to curate the database themselves. While some keep their database secret to provide a personal flair to their casting, others find it important to share this information with their e-sports communities. One shoutcaster recalled his surprise when he first worked with a major South Korean e-sports company with its own dedicated stats team. He expressed that he had never realized how much he needed a dedicated stats team like you find in traditional sports until that moment. It was then that he realized how much of his daily routine stats curation filled. While he was grateful for the help, he also felt personally responsible for stats collection and did not entirely trust the figures from the professional statisticians. This example shows the difficult position e-sports fills, constantly stuck between borrowing from traditional sports while not being fully able to cope with the maturity of the sports media industry. Another role which tends to fill a shoutcaster's daily routine is community maintenance. Whether the caster creates their own content on gaming sites, responds to fans on social media, or spends their time streaming and interacting with the community, they all mentioned some form of community maintenance as part of their duties as a shoutcaster. This particular focus on community maintenance most likely results from the grassroots origins of shoutcasters. These casters were a part of an e-sports community long before they became shoutcasters. Whether they view it as their professional responsibility or a social responsibility remains unclear. They all admit to some level of e-sports advocacy, however. They view PR and the proliferation of e-sports as part of their responsibilities.
The most effective way to tackle this issue, many of them have decided, is through community engagement. The community aspect of shoutcasting identity leads me to a discussion of the affordances of the hyper-connectivity in live-streaming. Grappling with the Hyper-Connectivity in Live-streaming and E-sports Shoutcaster Connection I have yet to meet anyone in the e-sports industry who has not remarked on the unique level of connection present in e-sports. Shoutcasters, especially, tap into the network created in these online communities. In a representative summary of my conversations, one shoutcaster explained, "the connectedness is so unique in e-sports. The way that we can interact with fans instantly. The players at the end of the day are gamers, they know exactly where to look. They've got Twitter, they go on Facebook, they post on Reddit." Audience members connect ephemerally in the IRC of a Twitch stream, but they constantly scour the social media outlets of their favorite stars, e-sports companies, and shoutcasters, creating a deeply connected community. Professional shoutcasters understand that the e-sports communities operate in a unique way when compared to traditional sports fandom. E-sports fans have an odd connection to franchises or teams within their chosen e-sport. As mentioned before, turnover rates and general industry growth force entire communities to radically reform from one season to another. Where traditional sports fans often follow a team based on geographic loyalty or familial connections, e-sports fans do not have that option. While you will often hear of fans cheering for teams in their geographic region (North America, Europe, South-East Asia, etc.) if they make it to the last few rounds of an international tournament, they may also base their fandom off of a team logo or a particular player instead. Shoutcasters recognize this dynamic and use it to cultivate the community. Communication, they claim, separates them from traditional sports broadcasts or even news anchors: "We communicate more with our audience than you'll see TV news anchors or celebrities, but it's part of our job to get more information out there." The focus on communication seems to be unique to shoutcasters, as the majority of it happens outside of their broadcasts. While many shoutcasters define their role on-screen as an educator of sorts, the notion of spreading information about e-sports falls outside of their screen time. This double role of broadcaster and community manager extends what media scholars have dubbed the broadcasting persona beyond the point typically associated with sportscasters or news anchors. Shoutcasters and Persona Horton and Wohl (1956), two social scientists who study mass media, make the assertion that mass media performers make a conscious decision to create and maintain parasocial interactions through the creation of a persona. Social scientists have coined the term parasocial interaction for the intangible connection which most of us feel to some form of media or another. Standing in contrast to interpersonal interaction, a person-to-person exchange between two real and cognizant human beings, parasocial interaction is instead a unidirectional relationship (Miller and Steinberg 1970). The feeling of connection we create with fictional characters, news anchors, or sports stars does not fall within the definition of an interpersonal interaction.
Whether mediated through a screen or the pages of a book, a parasocial interaction does not manifest in an exchange of thoughts or words between individuals. Rather, it is embodied and lived through one individual. Schiappa et al. (2007) conducted a meta-analysis of parasocial interaction literature to better understand how broadcasters 'hook' viewers to a certain show. They concluded that parasocial interactions can create and prolong connection to television programming. While Schiappa et al. concede that there are a few opportunities for a parasocial interaction to result in interpersonal relationships in the physical world, the compelling issue is the establishment of intimacy mediated through means well outside of a person-to-person context. Horton and Wohl set out with the goal of creating a term for the relationship between performers and their audience in mass media. The authors suggest that the emergence of mass media created an illusion of connection to performers which was previously unavailable. They argue that the connection people feel to mass media stars is analogous to primary social engagement. If this type of engagement takes place in radio and television, where users have no opportunity to interact with audience members who are not co-present, it follows that the interaction between broadcasters, their audience, and one another in a Twitch stream is a particularly deep connection, even beyond the level noticed by Horton and Wohl. Shoutcasters create a familiar face and personality for audience members to connect with. Mark Levy (1979), another proponent of parasocial interaction who focused his work on news anchors, suggests that both news anchors and sportscasters help to create and maintain communities through regular scheduling, conversational tones, and the creation of a broadcasting persona. Shoutcasters perform this same role to even greater effect due to the constant changes surrounding the e-sports industry. The regularity and consistency of shoutcasters' broadcasts helps to foster a feeling of genuine connectedness within the community. Although difficult to quantify, many conversations with shoutcasters turned to the odd feeling of connection that e-sports fans feel towards one another. One shoutcaster attempted to explain this connection by stating, "[w]henever I go to an event, I realize that fans are just friends I haven't met yet." I found this statement to be particularly poignant. It hints at the sort of intangible connection e-sports industry personalities and fans feel to one another through live-streams. Anecdotally, this air of friendship permeated e-sports events that I have attended and went well beyond what I have felt at traditional sporting events or concerts. Previously, persona creation and maintenance occurred on-screen or at events only. Social media has forced many media personalities to extend their personas beyond the long-held notions of broadcaster-fan interaction. In many ways, shoutcasters must go beyond even these extended boundaries into near constant persona maintenance because of their roles in live-streaming and community maintenance. Many shoutcasters give up their personal, off-air time to stream their own gameplay or to create video content, which necessarily prolongs the amount of time they embody their broadcast persona. I found that shoutcasters create a variation on the broadcast persona.
Rather than a full-blown broadcasting personality which they inhabit while on-air, most shoutcasters have found that between community management, social media interactions, and broadcasts, they almost never get an opportunity to step out of their role as a shoutcaster. Due to this near constant connection, most shoutcasters acknowledge that they act differently on air, but they tend to simply invoke a more upbeat and charismatic version of themselves. Echoed in each of the interviews, the casters point to the idea of excitement: "you have to get excited for the person out there watching." Even if they are not in the mood to shoutcast, or they have had a bad day, shoutcasters must leave their personal issues out of the broadcast. This aspect of the shoutcaster's personality comes out in all of their interactions on social media as well. Most of the shoutcasters I interviewed situated their role in e-sports as somewhere between Public Relations, Marketing, and Community Management. One of the casters explained the importance of invoking the broadcast persona when speaking about sponsor expectations: "We're working in an industry with companies behind us, we can't always say exactly what we want to say." Shoutcasters' acknowledgement of their involvement in securing sponsorships signals an interesting shift in the e-sports industry: the focus of the broadcast team on potential revenue generation. I turn now to an analysis of the revenue streams found in both traditional sports and e-sports broadcasting. Chapter 3 Revenue Funding Professional Play After situating e-sports broadcasting within the greater sports media landscape, particularly in conventions, casting, and use of medium, it is important to analyze the portions of sports media production that have made their way into e-sports broadcasting. If we acknowledge the influence that traditional sports broadcasting has had on e-sports broadcasting in the realms of conventions and casting, we must also understand the importance of this relationship at the production and economic levels. In this chapter I discuss how the history and development of the sports media industrial complex in the U.S. has bled into the economics of the e-sports industry. In particular, I focus on how sports media models inform the e-sports industry while portions of the sports industry's revenue streams remain out of reach for e-sports broadcasters. Despite the reshuffling of the sports media industrial complex mentioned in the introduction to this thesis, traditional sports broadcasting still relies on the same revenue streams that it had in the past. Traditional sports producers have fully capitalized on the commodification of their content. E-sports producers, in contrast, are still shaping their revenue streams within live-streaming. The commercialization found in the sports media industrial complex has taken hold of the e-sports industry in several notable ways. Following the example set by Stein's thesis work, it is not enough to just acknowledge the relationship between e-sports and traditional sports media; we must also understand the path which brought e-sports broadcasting to its current state. USER: How do e-sports broadcasts compare with traditional sports broadcasts? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
You are given a reference document. You must only use information found in the reference document to answer the question asked.
What is the best co sleeper for me and my new baby?
❚ MadeForMums reviews are independent and based on expertise and testing. When you buy through links on our site, we may earn an affiliate commission, but this never influences our product choices. 8 of the best bedside cribs and cosleepers for safe sleeping for your baby We've tried, tested and reviewed the best bedside cribs, for a brilliant way to sleep closely and safely with your baby Gemma Cartwright Published: March 5, 2024 at 3:20 PM A bedside crib is one of the most popular choices for newborn sleep, as it allows you to keep your baby close while still following safe sleep guidelines. In the first 6 months, when the risk of sudden infant death syndrome (SIDS) is at its highest, the safest place for a baby to sleep is on their back in their own sleep space, be that a cot, crib or moses basket. A bedside crib fastens to the frame of your bed on one side, so you're effectively lying next to your baby. The side can usually be dropped down so you can see and reach over to your child. They're sometimes referred to as side-sleepers or co-sleepers, but the key difference is that you're not sharing a sleep surface or bedding. You and your baby can maximise the soothing benefits that proximity brings while minimising the risks associated with bed sharing. Having your baby at arm's reach also makes night feeds much easier. Best bedside cribs and co-sleepers at a glance: • Best bedside crib with an easy drop-down side: Chicco Next2Me Magic, £189 • Best bedside crib with a removable bassinet: SnuzPod 4 Bedside Crib, £199.95 • Best bedside crib for smooth rocking: Tutti Bambini CoZee Air Bedside Crib, £225 • Best bedside crib for longevity: Shnuggle Air Bedside Crib, £180 • Best bedside crib for extra storage: Maxi-Cosi Iora Bedside Sleeper, £149 • Best bedside crib for one-handed operation: Joie Roomie GO, £180 • Best value bedside crib: Red Kite Cozysleep Bedside Crib, £84.99 • Best bedside crib with 360° swivel: Halo BassiNest Premiere Swivel Sleeper, £248.29 There is a wide range of options, so at MadeForMums we've analysed the bedside crib market closely to bring you the very best choices. We've used feedback from our expert journalist reviewers and parent testers, combined with results from in-house MadeForMums testing, which looked at key features such as breathability, mattress firmness, ease of building as well as functionality. For each bedside crib we've listed the key technical features to help you compare across brands and models so you can find the best design to suit your needs. If your baby is struggling to sleep through the night, take a look at our best sleep aids and white noise machines, best nightlights and best baby swaddles. What is the new safety standard for bedside cribs? All new bedside cribs manufactured since November 2020 have to meet a new safety standard (with the catchy name BS EN 1130:2019) that introduced new and more rigorous safety requirements for bedside cribs. However, you may find some older versions of cribs are still on sale that only match the previous safety standard. Slowly these will disappear from stores and the only ones available will meet the new standard. The most significant new requirement for BS EN 1130:2019 is for a 120mm high barrier to be present around the sides of the crib, to ensure your baby is not able to roll off their own mattress onto yours. This means that new bedside cribs can no longer have complete drop-down sides – many now have 'half-height' walls instead. This allows your baby to be positioned next to you with the crib lined up to your bed, but their mattress will be sunk a little lower, providing more of a protective barrier. All the cribs featured in our list comply with these new BS EN 1130:2019 safety requirements. What to look for when buying a bedside crib Will it work with your bed? – Certain bed frames can be trickier to use with a bedside crib. For example, if you have a divan bed you will need longer straps, and may not be able to tuck the legs of the crib underneath the bed and may need to look for a model that has foldable legs or works with your bed style. Height of your bed – Most bedside cribs have adjustable heights to give you an almost perfect fit on most bed frames, but if your bed is particularly low or high, do check the measurements. Also check the size of the crib and whether it will fit next to your bed while allowing you to get in and out easily and safely. This is particularly important for those first few days and weeks after giving birth when your body is still recovering. Mattress – The mattress needs to be firm, flat and breathable – this is a key safety feature. Don't be tempted by a super soft mattress – your baby will sleep deeply and most importantly safely on a firm mattress. Drop-down side – How easy is it to remove the side? Can you do it with one hand? As you may be doing this in the middle of the night, are there lots of noisy zips and clips? Can it safely be left down while you sleep? Do check this as the rules differ depending on the product. How easy is it to assemble – Are there lots of parts to screw together? Will you need 2 people to build it? We've tested how easy different bedside cribs are to build in our reviews. How easy is it to keep clean – Does the mattress have a waterproof cover to protect from leaky nappies, baby sick and dribbles? Is the fabric machine washable or will you have to hand wash it? Portability – Is the crib light enough to move around your house? If you want to take it away with you, does the crib fold flat and/or come with a storage bag? Extra features – Does it rock (useful for fussy sleepers), tilt (remember to use tilting with care), detach to become a moses basket or turn into an older baby cot or playpen? These extra features may not be necessary, but they could be useful. For more safety information we've also covered breathability, bedding and how to use the tilting function here. What are the benefits of using a bedside crib? Safe sleep charity The Lullaby Trust advises that the safest place for your baby to sleep is on their own sleep surface, in the same room as you, for at least the first 6 months. Bedside cribs allow you to have your baby sleeping right next to you at night, but in the safety of their own crib. This means you can still be close to your baby without bed-sharing, which carries a risk of suffocation and overheating. Bedside cribs enable you to lean over and easily pick up your baby when feeding at night. This is especially useful if you've had a difficult birth or a c-section and find getting out of bed painful. You can also easily comfort your baby if they are fussing and have a good view of them while they are sleeping. From a practical perspective, bedside cribs are smaller and more compact than most cots, which means they take up less space in your bedroom than a full-sized cot or cotbed. How to do the baby mattress firmness test: • Press your hand on the centre and the sides of the mattress • A firm mattress shouldn't mould to the shape of your hand and you'll feel resistance – it will obviously move beneath the pressure but your hand shouldn't sink in • When you remove your hand, the mattress should snap back and regain its shape Do I need a bedside crib for my baby? You don't have to buy a bedside crib. It's completely safe to put a baby in a regular cot from birth. But they're a great option if you want your baby as close to you as possible at night, and for saving space. The downside is that most of these cribs only last up to 6 months and you'll then need to move your baby into a full-sized cot or cotbed. A moses basket is a more economical option, but these can last even less time, and do not have the added features of a bedside crib such as a drop-down side, tilt, or multiple heights. How much does a bedside crib cost? It is possible to buy budget bedside cribs for under £100 but the majority we have reviewed are between £150-£300. Certain features, such as a rocking function or one-handed drop-down side, tend to push the price up slightly. How did we choose these bedside cribs? Our 10 of the Best lists are compiled by qualified and experienced parenting journalists. They rely on a number of sources, including our independent reviews, testing undertaken during the MadeForMums Awards, and feedback from our home testing panel and Top Testers Club. Each year thousands of products are put through their paces by hundreds of parents across the country on behalf of MadeForMums, to ensure we're bringing you honest and true reviews and recommendations. When testing bedside cribs, we consider size, ease of build and fitting, mattress quality and breathability, ease and safety of the drop-down side mechanism and other features, comfort for baby, design and quality, and whether it's worth the money. Our list is not an ordered ranking from 1-10; instead it is a carefully selected group of tried-and-tested products, each of which we believe is best for a different situation or requirement. We don't just tell you what is best, we help you discover what is best for your family. Here are our top 10 bedside cribs for 2024 1. Chicco Next2Me Magic, £189 – Best for easy drop-down side Suitable from: Birth to 6 months/9kg | Weight: 13.1kg | Crib size: H66.5-82.4cm x W73cm x L99.5cm | Mattress size: L83cm x W50.5cm | Tilt: Yes | Rocks: Yes | Height positions: 11 | Washable mattress cover: Hand wash The Chicco Next2Me Magic is the latest update to the original Next2Me side-sleeping crib, which has won fans for its versatility. It can be used from birth as a bedside co-sleeper, as a standalone crib or possibly as a travel cot, but at over 13kg it's not a light carry. It is slightly more expensive than some other models, but standout features include a really easy drop-side that can be operated with one hand, 11 height levels, a lockable rocking function, 4 tilt options to help reduce reflux, and wheels to make it easy to move around your home. A large sleeping area means more room for a bigger baby, plus a travel bag is included. MFM tester Lucy said, "I found the Chicco Next2Me Magic a breeze to move around and set up, but also substantial and sturdy. The clever one-handed drop-down mechanism on the side panel can be used while holding your baby in your arms, which is brilliant. "I've even used the Chicco in my kitchen for safe day naps when I need to be more focused on my older child." Pros: Firm and breathable mattress, retractable legs to fit any bed, quiet side zip, easy to transport Cons: Tricky to initially assemble, mattress cover is hand wash only Read our full MadeForMums Chicco Next2Me Magic bedside crib review Available from: John Lewis and Mamas & Papas John Lewis & Partners £229.00 Buy now Mamas & Papas £229.00 Buy now 2. SnuzPod 4 Bedside Crib, £199.95 – Best for removable bassinet Suitable from: Birth to 6 months/9kg | Weight: 11.5kg | Crib size: H95cm x W49cm x L100cm | Mattress size: L75cm x W40cm | Tilt: Yes | Rocks: Yes | Height positions: 7 | Washable mattress cover: Machine washable The latest iteration of Snuz's much-loved bedside crib, the Snuzpod4 features a new breathable system (called ComfortAir) that aids the flow of air around the crib and your baby. It offers more side vents, breathable mesh liner and mattress, plus a ventilated base. But the key thing that we're delighted to see is that the Snuzpod4 has a firmer mattress than previous versions – as well as good breathability. Plus Snuz claims that the SnuzPod4 fits more bed heights than any rival, as it will now work with beds up to a maximum adult mattress height of 73cm. It's also designed to be compatible with a range of bed types – divan, ottoman and framed bed bases. Made from sustainably sourced beech solid wood, the Snuzpod4 looks good. MFM mum home tester Mehack commented on "how stylish and contemporary the design is," praising how it "fits perfectly with the room decor". We love its versatility – the two-part design includes a lift-off bassinet that can be moved around the house so you have a portable safe sleeping space for your baby, whichever room you're in. The bassinet also has a manual rocking function, as does the crib. There's an optional riser that can be added to create a slight incline to help babies with reflux, but for safety reasons, when the cot is tilted this stops the rocking function from working. Pros: Stylish, removable bassinet, great storage Cons: Can be difficult to put together Read our full MadeForMums SnuzPod 4 bedside crib review Available from: Snuz, Samuel Johnston and Amazon Very.co.uk £159.99 Buy now Samuel Johnston £190.18 Buy now Amazon UK £199.95 Buy now John Lewis & Partners £199.95 Buy now 3.
Tutti Bambini CoZee Air Bedside Crib, £225 – Best for smooth rocking Suitable from: Birth to 6 months/9kg | Weight: 11kg | Crib size: H92cm x W12cm x L56cm | Mattress size: L80.5cm x W51cm | Tilt: Yes | Rocks: Yes | Height positions: 6 | Washable mattress cover: Sponge, only machine wash if necessary While it is at the more expensive end of the market, what makes the CoZee Air stand out from the competition is its smooth rocking function. It comes with easy-to-remove caster wheels that you can switch with rocking bars, which easily attach to the legs of the crib. As a safety feature, the CoZee can also only be rocked when it is set up as a standalone crib – when used as a bedside crib, it has flip-out feet that prevent it from doing so. “The rocking feature is fantastic and really helped me to settle my baby when she was overtired and fussing,” said MFM tester Tara. MFM testers also rated the crib highly for its portability – it is ideal as a travel cot, as despite its large size, it is compact when folded. A 30-second open-fold mechanism allows for a quick set up and it comes with a travel bag for easy transportation. While the multiple mesh windows are great for breathability and being able to see your little one, there's a curtain attached to one side of the crib that you can roll down to protect your baby from draughts during colder months. This still leaves one mesh side open to allow for plenty of air flow. When it comes to cleaning, the fabric lining can be removed and put in the washing machine, while the foam mattress can be machine washed if necessary. We also like the addition of a storage shelf that is useful for holding essentials such as baby wipes, nappies, clothes and muslins. Pros: Smooth rocking, quick to collapse down, storage shelf Cons: Higher price point Read our full MadeForMums Tutti Bambini CoZee Air Bedside Crib review Available from: Boots, Kiddies Kingdom and Tutti Bambini Kiddies Kingdom £165.00 Buy now For Your Little One £180.00 Buy now Wayfair £186.63 Buy now Dunelm £219.00 Buy now 4. Shnuggle Air Bedside crib, £180 – Best for longevity Suitable from: Birth to 6 months/9kg (up to 2 years with conversion kit) | Weight: 13.4kg | Crib size: H68.5–83cm x W56cm x L94cm | Mattress size: L83cm x W50cm | Tilt: Yes | Rocks: No | Height positions: 7 | Washable mattress cover: Hand wash While most bedside cribs on the market are only suitable for babies up to 6 months old, the Shnuggle Air stands out by offering 3 products in 1. It can be used as a standalone cot or bedside sleeper and then it transforms after 6 months into a full-sized cot when you buy the additional conversion kit (£109.95) and cot mattress (£50), which will last your child up until around 2 years old. This makes it a great long-term investment. MFM judges and testers were particularly impressed with the firmness of its hypo-allergenic airflow mattress. This crib has dual-view mesh sides, giving it maximum breathability; this also means you can easily see your baby when both sides are up. This was also a feature that stood out to MFM reviewer Tara, who used it with her 6-month-old daughter Elodie. She said, “Elodie slept very soundly and she loved being able to see through the mesh sides.” The drop-down sides are easily removed for nighttime access by releasing the safety catch on the top bar and undoing the zips. However, during the awards testing, it was noted that the safety catch makes a loud click. 
This was echoed by a MFM user reviewer who said: “The side makes a noise when you click it back in and that can wake up baby!” Unlike most of the others on this list, the side of the Shnuggle Air cannot be left down during sleep, it's simply there for access. The Shnuggle Air is relatively heavy at 13.4kg, and doesn't have wheels, so it's not easy to move around your home. “I’d say once the Shnuggle Air is set up, it’s staying put,” Tara added. Pros: Long-lasting, highly breathable, spacious Cons: Not easily portable, side is noisy when released, hand wash only Read our full MadeForMums Shnuggle Air Bedside Crib review Available from: Amazon, John Lewis and Shnuggle John Lewis & Partners £180.00 Buy now Amazon UK £199.95 Buy now Kiddies Kingdom £299.00 Buy now 5. Maxi-Cosi Iora bedside sleeper, £149 – Best for extra storage Suitable from: Birth to 6 months/9kg | Weight: 10.8kg | Crib size: H74.5cm x W55.5cm x L93cm | Mattress size: L80cm x W58.5cm | Tilt: Yes | Rocks: No | Height positions: 5 | Washable mattress cover: Hand wash With its choice of muted colours, sleek design and quality materials, the Maxi-Cosi Iora is sure to fit in with most room schemes. The large storage basket at the bottom of the crib is great for parents who are short on space as it can easily hold numerous blankets, baby sleeping bags, nappies, wipes and spare clothes. The Iora’s easy-to-adjust height (5 positions in total) and slide function (2 positions in total) also means it can fit snugly against most types of bed when used with the straps. “Our iron-frame bed is somewhat lower than average,” said MFM reviewer Georgina. “But the Iora also sat in the correct position with our mattress.” One feature that our reviewer Georgina particularly liked was that when the side is down, there is a 7-inch (18cm) barrier to stop your baby rolling out. She said: “The Iora allowed me to sleep as close to my daughter as possible, but I was also safe in the knowledge that she was in her own sleeping area and I wasn't going to squash her!” This crib is extremely straightforward to assemble (one of the quickest during MFM testing) and MFM reviewer Georgina managed to put it together speedily without using the instructions. She explained: “It was obvious which pieces go together, simple to build and had neat zips to keep everything in place.” A handy bag also means it can easily be used as a travel cot, especially as it folds down flat. Keep in mind that Georgina did find the outer fabric was prone to creasing when unpacked from the travel bag. Pros: Extra storage, easy height and slide adjustments, portable, smart appearance Cons: Mattress cover hand wash only, outer fabric prone to creasing, not as many height options as other cribs, only mesh on one side Read our full MadeForMums Maxi-Cosi Iora review Available from: Samuel Johnston, John Lewis and Amazon Kiddies Kingdom £169.00 Buy now John Lewis & Partners £199.99 Buy now Mamas & Papas £199.99 Buy now Very.co.uk £199.99 Buy now 6. Joie Roomie GO, £180 – Best for one-handed operation Suitable from: Birth to 6 months/9kg | Weight: 9.5kg | Crib size: H74.8- 82.2cm x W68.5cm x L90.3cm | Mattress size: H6cm x W51cm x L84cm | Tilt: Yes | Rocks: No | Height positions: 5 | Washable mattress cover: Machine washable | Awards: Gold – Bedside/Co-Sleeper Crib, MadeForMum Awards 2023 Awarded Gold in Best Bedside/Co-Sleeper Crib, MadeForMums Awards 2023, the Joie Roomie Go packs in a lot of features for its mid-range price. 
Offering mesh windows on both sides, providing plenty of ventilation as well as making it easy to keep an eye on your baby, the stylish crib is available in a choice of chic grey or classic black. Our MFM home testers were impressed with the Roomie Go’s aesthetic, with one commenting, “It looks great, is made with good quality material and will look stylish in any room.” The one-handed drop-down panels on both sides of the crib mean you can easily switch which side of the bed you attach it to. You should be able to simply click the handle to lift and lower, although one of our home testers commented that the first couple of times they attempted this the mechanism was a little sticky. Its simple, compact fold means you can pack the crib away in less than a minute and take it with you in the travel bag included, for holidays or trips to the grandparents’. The Joie Roomie Go is also on (lockable) wheels so you can move it around the home during the daytime. It has a tummy tilt for reflux/colic, and there are 5 height adjustments to fit most beds. Praised across the board by our MFM home testers for its comfy mattress and ease of assembly, it’s a great all-rounder both when at home and away. Pros: One-handed operation, tilt function for reflux, comfortable for baby, drop-down panels on both sides, travel bag included Cons: No storage, not as many height options as other cribs Available from: John Lewis, Joie and Argos Very.co.uk £179.99 Buy now argos.co.uk £180.00 Buy now John Lewis & Partners £180.00 Buy now Kiddies Kingdom £180.00 Buy now 7. Red Kite Cozysleep Crib, £84.99 – Best for value Suitable from: Birth to 6 months/9kg | Weight: 9kg | Crib size: H74-87cm x W57-61cm x L88cm | Mattress size: W80cm x L50cm | Tilt: Yes | Rocks: No | Height positions: 7 | Washable mattress cover: No, wipeable only | Awards: Silver – Bedside/Co-Sleeper Crib, MadeForMum Awards 2023 Coming in at just under £85 the Red Kite Cozysleep crib offers really fantastic value. However, the great price doesn't mean there's a compromise on features or style. “It’s a well-made product that looks modern and would easily suit all bedrooms,” said MFM home tester Kiran, who appreciated the simple, yet contemporary look. The crib has a drop-down side, 7 adjustable height positions, a tilt function (great for helping with reflux) and a handy storage shelf for things like nappies and wipes. It's on wheels, so it can be moved around the room or away from the bed with ease, and it also folds down to a more compact size for travel. There’s even a handy storage bag included, which our testers felt helps you to get even more use out of the Cozysleep as a travel cot. One feature that really impressed our home testers was the quality of the soft, quilted mattress, with one MFM home tester commenting, “The mattress is brilliant! I have used other makes of co-sleepers/cribs and this mattress is triple the thickness. It feels soft but firm and very comfy.” Pros: Great value, tilt function, good quality mattress, handy storage shelf, travel bag included Cons: Only mesh on one side Available from: Amazon and Kiddies Kingdom Kiddies Kingdom £79.99 Buy now Samuel Johnston £104.40 Buy now 8. 
Halo BassiNest Premiere Swivel Sleeper, £248.29 – Best for 360° swivel Suitable from: Birth to 5 months/10kg | Weight: 14.8kg | Crib size: H94cm x W61cm x L114cm | Mattress size: L85cm x W55.8cm | Tilt: No | Rocks: Battery-powered vibrations | Height positions: Customisable between 61cm-84cm | Washable mattress cover: Machine-washable sheet included This is American brand Halo's updated version of its popular BassiNest Essentia swivel sleeper. Offering a slightly different way to sleep closely but safely with your baby, the BassiNest Premiere is a standalone crib with a central stand that slides beneath the bed, rather than fastening on to the side of the bed. Parents can then swivel the crib 360° for easy access, with one MFM home tester pointing out this also "makes it easy to get in and out of bed without disturbing the baby". There's no drop-down side, instead the mesh side has enough give that you can push it down to reach and get your baby before it automatically returns to the upright position. Compared to cribs with open sides that sit flush with the bed, the BassiNest is more of a hybrid product, sitting somewhere between a moses basket and a bedside crib. While the BassiNest Premiere doesn't have a rock or tilt function, it does have a built-in “soothing centre” that features an amber nightlight, floorlight, 2 vibration levels and 4 soothing sounds, all with auto shutoff. To use this function you will need 3 x AA batteries (not included). Pros: Flexible, useful when recovering from birth, customisable height to fit most beds, built-in soothing centre Cons: Not a true bedside crib, very heavy, need batteries to access the soothing centre functions, expensive Available from: Halo, John Lewis and Boots John Lewis & Partners £249.00 Buy now How do you use a bedside crib safely? The most important piece of advice for safe sleeping is to lie your baby on their back to sleep. Indeed, since the Back To Sleep campaign was launched in the UK 30 years ago, cases of SIDS (Sudden Infant Death Syndrome) have fallen by 80%. When using a bedside crib, you should ensure there is no gap between the adult's and baby's mattress. Your baby’s mattress should be firm and flat, and sit snugly in the crib with no gaps. Also look for a mattress that is breathable. There's a simple test you can do for this: Most cribs come with a mattress as standard, but if you are given the crib by someone else or buy one second-hand you will need to buy a new mattress – even if the existing one appears to be in good condition. Second-hand mattresses may increase the risk of SIDS and are less likely to be supportive after losing their shape over time. Always use the mattress designed to fit your bedside crib – most retailers sell them separately should you need a replacement. When it comes to a safe sleeping position, place your baby in the crib with their feet at the end of the crib – called the feet-to-foot position. This reduces the risk of their face or head slipping down under the covers if you're using a blanket. How to use tilting and rocking features safely Some bedside cribs offer a tilt option, which may help babies with digestive issues, colic or reflux. If you are going to tilt your baby, you must do so with great care and only at a slight angle, to avoid your baby slipping down. We recommend speaking to your GP or health visitor for advice before using the tilt function. 
Tilting (and rocking) can only be used when the bedside crib is set up as a standalone crib – for safety reasons, you should not tilt or rock the crib when the side is down as there is a chance your baby could fall out. Our at-home mattress breathability test: • Pick up the mattress and place it close to your mouth • Breathe in and see how easy it is to breathe out with the mattress near your mouth • If it's easier, this should mean the mattress offers good ventilation What bedding can I use with a bedside crib? The Lullaby Trust advises, "Firmly tucked-in sheets and blankets (not above shoulder height) or a baby sleep bag are safe for a baby to sleep in." Make sure you buy the correct size sheets that exactly fit your mattress. You may also choose to swaddle a newborn. The Lullaby Trust does not advise for or against swaddling, but it does have some basic swaddling guidance. You must stop using a swaddle as soon as your baby learns to roll. Not all baby sleeping bags and swaddles are created equal, so make sure the brand you buy adheres to safety standards, is the correct tog for the room temperature and season, and is the right size for your baby, so they can't slip down inside. Don't use any soft or bulky bedding and never use pillows, duvets, baby bumpers or baby positioners. You should also remove any soft toys from the crib before your baby sleeps. Gemma Cartwright, Group Digital Editor: Gemma has two decades of experience in digital content. She is mum to a preschooler, and aunt to 4 children under 4. She is particularly passionate about sleep (for babies and parents) and loves testing out gadgets, technology and innovation in the parenting world.
You are given a reference document. You must only use information found in the reference document to answer the question asked. What is the best co sleeper for me and my new baby? ❚ MadeForMums reviews are independent and based on expertise and testing. When you buy through links on our site, we may earn an affiliate commission, but this never influences our product choices. 8 of the best bedside cribs and cosleepers for safe sleeping for your baby We've tried, tested and reviewed the best bedside cribs, for a brilliant way to sleep closely and safely with your baby Gemma Cartwright Published: March 5, 2024 at 3:20 PM Save A bedside crib is one of the most popular choices for newborn sleep, as it allows you to keep your baby close while still following safe sleep We value your privacy We need your consent so that we and our 172 trusted partners can store and access cookies, unique identifiers, personal data, and information on your browsing behaviour on this device. This only applies to Immediate Media. You can change your preferences at any time by clicking on ‘Manage Privacy Settings’ located at the bottom of any page. You don’t have to agree, but some personalised content and advertising may not work if you don’t. We and our partners use your data for the following purposes: Store and/or access information on a device Precise geolocation data, and identification through device scanning Personalised advertising and content, advertising and content measurement, audience research and services development. Google Consent Mode framework To view our list of partners and see how your data may be used, click or tap ‘More Options’ below. You can also review where our partners claim a legitimate interest to use your data and, if you wish, object to them using it. MORE OPTIONS AGREE guidelines. In the first 6 months, when the risk of sudden infant death syndrome (SIDS) is at its highest, the safest place for a baby to sleep is on their back in their own sleep space, be that a cot, crib or moses basket. Advertisement A bedside crib fastens to the frame of your bed on one side, so you're effectively lying next to your baby. The side can usually be dropped down so you can see and reach over to your child. They're sometimes referred to as side-sleepers or co-sleepers, but the key difference is that you're not sharing a sleep surface or bedding. You and your baby can maximise the soothing benefits that proximity brings while minimising the risks associated with bed sharing. Having your baby at arm's reach also makes night feeds much easier. Best bedside cribs and co-sleepers at a glance Jump to our list of the best bedside cribs and cosleepers • Best bedside crib with an easy drop-down side: Chicco Next2Me Magic, £189 • Best bedside crib with a removable bassinet: SnuzPod 4 Bedside Crib, £199.95 • Best bedside crib for smooth rocking: Tutti Bambini CoZee Air Bedside Crib, £225 • Best bedside crib for longevity: Shnuggle Air Bedside Crib, £180 • There are a wide range of options, so at MadeForMums we’ve analysed the bedside crib market closely to bring you the very best choices. We’ve used feedback from our expert journalist reviewers and parent testers, combined with results from in-house MadeForMums testing, which looked at key features such as breathability, mattress firmness, ease of building as well as functionality. For each bedside crib we’ve listed the key technical features to help you compare across brands and models so you can find the best design to suit your needs. 
If your baby is struggling to sleep through the night, take a look at our best sleep aids and white noise machines, best nightlights and best baby swaddles. More like this Silver Cross Voyager Co-Sleeper Bedside Crib review What is the new safety standard for bedside cribs? All new bedside cribs manufactured since November 2020 have to meet a new safety standard (with the catchy name BS EN 1130:2019) that introduced new and more rigorous safety requirements for bedside cribs. However, you may find some older versions of cribs are still on sale that only match the previous safety standard. Slowly these will disappear from stores and the only ones available will meet the new standard. The most significant new requirement for BS EN 1130:2019 is for a 120mm Best bedside crib for extra storage: Maxi-Cosie Iora Bedside Sleeper, £149 • Best bedside crib for one-handed operation: Joie Roomie GO, £180 • Best value bedside crib: Red Kite Cozysleep Bedside Crib, £84.99 • Best bedside crib with 360° swivel: Halo BassiNest Premiere Swivel Sleeper, £248.29 • high barrier to be present around the sides of the crib, to ensure your baby is not able to roll off their own mattress onto yours. This means that new bedside cribs can no longer have complete drop-down sides – many now have 'half-height' walls instead. This allows your baby to be positioned next to you with the crib lined up to your bed, but their mattress will be sunk a little lower, providing more of a protective barrier. All the cribs featured in our list comply with these new BS EN 1130:2019 safety requirements. What to look for when buying a bedside crib Will it work with your bed? – Certain bed frames can be trickier to use with a bedside crib. For example, if you have a divan bed you will need longer straps, and may not be able to tuck the legs of the crib underneath the bed and may need to look for a model that has foldable legs or works with your bed style. Height of your bed – Most bedside cribs have adjustable heights to give you an almost perfect fit on most bed frames, but if your bed is particularly low or high, do check the measurements. Also check the size of the crib and whether it will fit next to your bed while allowing you to get in and out easily and safely. This is particularly important for those first few days and weeks after giving birth when your body is still recovering. Mattress – The mattress needs to be firm, flat and breathable – this is a key safety feature. Don’t be tempted by a super soft mattress – your baby will sleep deeply and most importantly safely on a firm mattress. Drop-down side – How easy is it to remove the side? Can you do it with one hand? As you may be doing this in the middle of the night, are there lots of noisy zips and clips? Can it safely be left down while you sleep? Do check this as the rules differ depending on the product. How easy is it to assemble – Are there lots of parts to screw together? Will you need 2 people to build it? We’ve tested how easy different bedside cribs are to build in our reviews. How easy is it to keep clean – Does the mattress have a waterproof cover to protect from leaky nappies, baby sick and dribbles? Is the fabric machine washable or will you have to hand wash it? Portability – Is the crib light enough to move around your house? If you want to take it away with you does it crib fold flat and/or come with a storage bag? 
Extra features – Does it rock (useful for fussy sleepers), tilt (remember to use tilting with care), detach to become a moses basket or turn into an older baby cot or playpen? These extra features may not be necessary, but they could be useful. For more safety information, we've also covered breathability, bedding and how to use the tilting function here. What are the benefits of using a bedside crib? Safe sleep charity The Lullaby Trust advises that the safest place for your baby to sleep is on their own sleep surface, in the same room as you, for at least the first 6 months. Bedside cribs allow you to have your baby sleeping right next to you at night, but in the safety of their own crib. This means you can still be close to your baby without bed-sharing, which carries a risk of suffocation and overheating. Bedside cribs enable you to lean over and easily pick up your baby when feeding at night. This is especially useful if you've had a difficult birth or a c-section and find getting out of bed painful. You can also easily comfort your baby if they are fussing and have a good view of them while they are sleeping. From a practical perspective, bedside cribs are smaller and more compact than most cots, which means they take up less space in your bedroom than a full-sized cot or cotbed. How to do the baby mattress firmness test: Press your hand on the centre and the sides of the mattress • A firm mattress shouldn't mould to the shape of your hand and you'll feel resistance – it will obviously move beneath the pressure but your hand shouldn't sink in • When you remove your hand, the mattress should snap back and regain its shape • Do I need a bedside crib for my baby? You don't have to buy a bedside crib. It's completely safe to put a baby in a regular cot from birth. But they're a great option if you want your baby as close to you as possible at night, and for saving space. The downside is that most of these cribs only last up to 6 months and you'll then need to move your baby into a full-sized cot or cotbed. A moses basket is a more economical option, but these can last even less time, and do not have the added features of a bedside crib such as a drop-down side, tilt, or multiple heights. How much does a bedside crib cost? It is possible to buy budget bedside cribs for under £100 but the majority we have reviewed are between £150 and £300. Certain features, such as a rocking function or one-handed drop-down side, tend to push the price up slightly. How did we choose these bedside cribs? Our 10 of the Best lists are compiled by qualified and experienced parenting journalists. They rely on a number of sources, including our independent reviews, testing undertaken during the MadeForMums Awards, and feedback from our home testing panel and Top Testers Club. Each year thousands of products are put through their paces by hundreds of parents across the country on behalf of MadeForMums, to ensure we're bringing you honest and true reviews and recommendations. When testing bedside cribs, we consider size, ease of build and fitting, mattress quality and breathability, ease and safety of the drop-down side mechanism and other features, comfort for baby, design and quality, and whether it's worth the money. Our list is not an ordered ranking from 1 to 8; instead it is a carefully selected group of tried-and-tested products, each of which we believe is best for a different situation or requirement. We don't just tell you what is best, we help you discover what is best for your family. Here are our top 8 bedside cribs for 2024 1.
Chicco Next2Me Magic, £189 – Best for easy drop-down side Suitable from: Birth to 6 months/9kg | Weight: 13.1kg | Crib size: H66.5-82.4cm x W73cm x L99.5cm | Mattress size: L83cm x W50.5cm | Tilt: Yes | Rocks: Yes | Height positions: 11 | Washable mattress cover: Hand wash The Chicco Next2Me Magic is the latest update to the original Next2Me side-sleeping crib, which has won fans for its versatility. It can be used from birth as a bedside co-sleeper, as a standalone crib or possibly as a travel cot, but at over 13kg it's not a light carry. It is slightly more expensive than some other models, but standout features include a really easy drop-side that can be operated with one hand, 11 height levels, a lockable rocking function, 4 tilt options to help reduce reflux, and wheels to make it easy to move around your home. A large sleeping area means more room for a bigger baby, plus a travel bag is included. MFM tester Lucy said, “I found the Chicco Next2Me Magic a breeze to move around and set up, but also substantial and sturdy. The clever one-handed drop-down mechanism on the side panel can be used while holding your baby in your arms, which is brilliant. "I've even used the Chicco in my kitchen for safe day naps when I need to be more focused on my older child.” Pros: Firm and breathable mattress, retractable legs to fit any bed, quiet side zip, easy to transport Cons: Tricky to initially assemble, mattress cover is hand wash only Read our full MadeForMums Chicco Next2Me Magic bedside crib review Available from: John Lewis and Mamas & Papas John Lewis & Partners £229.00 Buy now Mamas & Papas £229.00 Buy now 2. SnuzPod 4 Bedside Crib, £199.95 – Best for removable bassinet Suitable from: Birth to 6 months/9kg | Weight: 11.5kg | Crib size: H95cm x W49cm x L100cm | Mattress size: L75cm x W40cm | Tilt: Yes | Rocks: Yes | Height positions: 7 | Washable mattress cover: Machine washable The latest iteration of Snuz's much-loved bedside crib, the SnuzPod4 features a new breathable system (called ComfortAir) that aids the flow of air around the crib and your baby. It offers more side vents, breathable mesh liner and mattress, plus a ventilated base. But the key thing that we're delighted to see is that the SnuzPod4 has a firmer mattress than previous versions – as well as good breathability. Plus Snuz claims that the SnuzPod4 fits more bed heights than any rival, as it will now work with beds up to a maximum adult mattress height of 73cm. It's also designed to be compatible with a range of bed types – divan, ottoman and framed bed bases. Made from sustainably sourced solid beech wood, the SnuzPod4 looks good. MFM mum home tester Mehack commented on "how stylish and contemporary the design is," praising how it "fits perfectly with the room decor". We love its versatility – the two-part design includes a lift-off bassinet that can be moved around the house so you have a portable safe sleeping space for your baby, whichever room you're in. Both the crib and the lift-off bassinet have a manual rocking function. There's an optional riser that can be added to create a slight incline to help babies with reflux, but for safety reasons, tilting the cot stops the rocking function from working.
Pros: Stylish, removable bassinet, great storage Cons: Can be difficult to put together Read our full MadeForMums SnuzPod 4 bedside crib review Available from: Snuz, Samuel Johnston and Amazon Very.co.uk £159.99 Buy now Samuel Johnston £190.18 Buy now Amazon UK £199.95 Buy now John Lewis & Partners £199.95 Buy now 3. Tutti Bambini CoZee Air Bedside Crib, £225 – Best for smooth rocking Suitable from: Birth to 6 months/9kg | Weight: 11kg | Crib size: H92cm x W12cm x L56cm | Mattress size: L80.5cm x W51cm | Tilt: Yes | Rocks: Yes | Height positions: 6 | Washable mattress cover: Sponge, only machine wash if necessary While it is at the more expensive end of the market, what makes the CoZee Air stand out from the competition is its smooth rocking function. It comes with easy-to-remove caster wheels that you can switch with rocking bars, which easily attach to the legs of the crib. As a safety feature, the CoZee can also only be rocked when it is set up as a standalone crib – when used as a bedside crib, it has flip-out feet that prevent it from doing so. “The rocking feature is fantastic and really helped me to settle my baby when she was overtired and fussing,” said MFM tester Tara. MFM testers also rated the crib highly for its portability – it is ideal as a travel cot, as despite its large size, it is compact when folded. A 30-second open-fold mechanism allows for a quick set up and it comes with a travel bag for easy transportation. While the multiple mesh windows are great for breathability and being able to see your little one, there's a curtain attached to one side of the crib that you can roll down to protect your baby from draughts during colder months. This still leaves one mesh side open to allow for plenty of air flow. When it comes to cleaning, the fabric lining can be removed and put in the washing machine, while the foam mattress can be machine washed if necessary. We also like the addition of a storage shelf that is useful for holding essentials such as baby wipes, nappies, clothes and muslins. Pros: Smooth rocking, quick to collapse down, storage shelf Cons: Higher price point Read our full MadeForMums Tutti Bambini CoZee Air Bedside Crib review Available from: Boots, Kiddies Kingdom and Tutti Bambini Kiddies Kingdom £165.00 Buy now For Your Little One £180.00 Buy now Wayfair £186.63 Buy now Dunelm £219.00 Buy now 4. Shnuggle Air Bedside crib, £180 – Best for longevity Suitable from: Birth to 6 months/9kg (up to 2 years with conversion kit) | Weight: 13.4kg | Crib size: H68.5–83cm x W56cm x L94cm | Mattress size: L83cm x W50cm | Tilt: Yes | Rocks: No | Height positions: 7 | Washable mattress cover: Hand wash While most bedside cribs on the market are only suitable for babies up to 6 months old, the Shnuggle Air stands out by offering 3 products in 1. It can be used as a standalone cot or bedside sleeper and then it transforms after 6 months into a full-sized cot when you buy the additional conversion kit (£109.95) and cot mattress (£50), which will last your child up until around 2 years old. This makes it a great long-term investment. MFM judges and testers were particularly impressed with the firmness of its hypo-allergenic airflow mattress. This crib has dual-view mesh sides, giving it maximum breathability; this also means you can easily see your baby when both sides are up. This was also a feature that stood out to MFM reviewer Tara, who used it with her 6-month-old daughter Elodie. 
She said, “Elodie slept very soundly and she loved being able to see through the mesh sides.” The drop-down sides are easily removed for nighttime access by releasing the safety catch on the top bar and undoing the zips. However, during the awards testing, it was noted that the safety catch makes a loud click. This was echoed by a MFM user reviewer who said: “The side makes a noise when you click it back in and that can wake up baby!” Unlike most of the others on this list, the side of the Shnuggle Air cannot be left down during sleep, it's simply there for access. The Shnuggle Air is relatively heavy at 13.4kg, and doesn't have wheels, so it's not easy to move around your home. “I’d say once the Shnuggle Air is set up, it’s staying put,” Tara added. Pros: Long-lasting, highly breathable, spacious Cons: Not easily portable, side is noisy when released, hand wash only Read our full MadeForMums Shnuggle Air Bedside Crib review Available from: Amazon, John Lewis and Shnuggle John Lewis & Partners £180.00 Buy now Amazon UK £199.95 Buy now Kiddies Kingdom £299.00 Buy now 5. Maxi-Cosi Iora bedside sleeper, £149 – Best for extra storage Suitable from: Birth to 6 months/9kg | Weight: 10.8kg | Crib size: H74.5cm x W55.5cm x L93cm | Mattress size: L80cm x W58.5cm | Tilt: Yes | Rocks: No | Height positions: 5 | Washable mattress cover: Hand wash With its choice of muted colours, sleek design and quality materials, the Maxi-Cosi Iora is sure to fit in with most room schemes. The large storage basket at the bottom of the crib is great for parents who are short on space as it can easily hold numerous blankets, baby sleeping bags, nappies, wipes and spare clothes. The Iora’s easy-to-adjust height (5 positions in total) and slide function (2 positions in total) also means it can fit snugly against most types of bed when used with the straps. “Our iron-frame bed is somewhat lower than average,” said MFM reviewer Georgina. “But the Iora also sat in the correct position with our mattress.” One feature that our reviewer Georgina particularly liked was that when the side is down, there is a 7-inch (18cm) barrier to stop your baby rolling out. She said: “The Iora allowed me to sleep as close to my daughter as possible, but I was also safe in the knowledge that she was in her own sleeping area and I wasn't going to squash her!” This crib is extremely straightforward to assemble (one of the quickest during MFM testing) and MFM reviewer Georgina managed to put it together speedily without using the instructions. She explained: “It was obvious which pieces go together, simple to build and had neat zips to keep everything in place.” A handy bag also means it can easily be used as a travel cot, especially as it folds down flat. Keep in mind that Georgina did find the outer fabric was prone to creasing when unpacked from the travel bag. Pros: Extra storage, easy height and slide adjustments, portable, smart appearance Cons: Mattress cover hand wash only, outer fabric prone to creasing, not as many height options as other cribs, only mesh on one side Read our full MadeForMums Maxi-Cosi Iora review Available from: Samuel Johnston, John Lewis and Amazon Kiddies Kingdom £169.00 Buy now John Lewis & Partners £199.99 Buy now Mamas & Papas £199.99 Buy now Very.co.uk £199.99 Buy now 6. 
Joie Roomie GO, £180 – Best for one-handed operation Suitable from: Birth to 6 months/9kg | Weight: 9.5kg | Crib size: H74.8-82.2cm x W68.5cm x L90.3cm | Mattress size: H6cm x W51cm x L84cm | Tilt: Yes | Rocks: No | Height positions: 5 | Washable mattress cover: Machine washable | Awards: Gold – Bedside/Co-Sleeper Crib, MadeForMums Awards 2023 Awarded Gold in Best Bedside/Co-Sleeper Crib, MadeForMums Awards 2023, the Joie Roomie Go packs in a lot of features for its mid-range price. Offering mesh windows on both sides, providing plenty of ventilation as well as making it easy to keep an eye on your baby, the stylish crib is available in a choice of chic grey or classic black. Our MFM home testers were impressed with the Roomie Go's aesthetic, with one commenting, “It looks great, is made with good quality material and will look stylish in any room.” The one-handed drop-down panels on both sides of the crib mean you can easily switch which side of the bed you attach it to. You should be able to simply click the handle to lift and lower, although one of our home testers commented that the first couple of times they attempted this the mechanism was a little sticky. Its simple, compact fold means you can pack the crib away in less than a minute and take it with you in the travel bag included, for holidays or trips to the grandparents'. The Joie Roomie Go is also on (lockable) wheels so you can move it around the home during the daytime. It has a tummy tilt for reflux/colic, and there are 5 height adjustments to fit most beds. Praised across the board by our MFM home testers for its comfy mattress and ease of assembly, it's a great all-rounder both when at home and away. Pros: One-handed operation, tilt function for reflux, comfortable for baby, drop-down panels on both sides, travel bag included Cons: No storage, not as many height options as other cribs Available from: John Lewis, Joie and Argos Very.co.uk £179.99 Buy now argos.co.uk £180.00 Buy now John Lewis & Partners £180.00 Buy now Kiddies Kingdom £180.00 Buy now 7. Red Kite Cozysleep Crib, £84.99 – Best for value Suitable from: Birth to 6 months/9kg | Weight: 9kg | Crib size: H74-87cm x W57-61cm x L88cm | Mattress size: W80cm x L50cm | Tilt: Yes | Rocks: No | Height positions: 7 | Washable mattress cover: No, wipeable only | Awards: Silver – Bedside/Co-Sleeper Crib, MadeForMums Awards 2023 Coming in at just under £85, the Red Kite Cozysleep crib offers really fantastic value. However, the great price doesn't mean there's a compromise on features or style. “It's a well-made product that looks modern and would easily suit all bedrooms,” said MFM home tester Kiran, who appreciated the simple, yet contemporary look. The crib has a drop-down side, 7 adjustable height positions, a tilt function (great for helping with reflux) and a handy storage shelf for things like nappies and wipes. It's on wheels, so it can be moved around the room or away from the bed with ease, and it also folds down to a more compact size for travel. There's even a handy storage bag included, which our testers felt helps you to get even more use out of the Cozysleep as a travel cot. One feature that really impressed our home testers was the quality of the soft, quilted mattress, with one MFM home tester commenting, “The mattress is brilliant! I have used other makes of co-sleepers/cribs and this mattress is triple the thickness.
It feels soft but firm and very comfy.” Pros: Great value, tilt function, good quality mattress, handy storage shelf, travel bag included Cons: Only mesh on one side Available from: Amazon and Kiddies Kingdom Kiddies Kingdom £79.99 Buy now Samuel Johnston £104.40 Buy now 8. Halo BassiNest Premiere Swivel Sleeper, £248.29 – Best for 360° swivel Suitable from: Birth to 5 months/10kg | Weight: 14.8kg | Crib size: H94cm x W61cm x L114cm | Mattress size: L85cm x W55.8cm | Tilt: No | Rocks: Battery-powered vibrations | Height positions: Customisable between 61cm-84cm | Washable mattress cover: Machine-washable sheet included This is American brand Halo's updated version of its popular BassiNest Essentia swivel sleeper. Offering a slightly different way to sleep closely but safely with your baby, the BassiNest Premiere is a standalone crib with a central stand that slides beneath the bed, rather than fastening on to the side of the bed. Parents can then swivel the crib 360° for easy access, with one MFM home tester pointing out this also "makes it easy to get in and out of bed without disturbing the baby". There's no drop-down side, instead the mesh side has enough give that you can push it down to reach and get your baby before it automatically returns to the upright position. Compared to cribs with open sides that sit flush with the bed, the BassiNest is more of a hybrid product, sitting somewhere between a moses basket and a bedside crib. While the BassiNest Premiere doesn't have a rock or tilt function, it does have a built-in “soothing centre” that features an amber nightlight, floorlight, 2 vibration levels and 4 soothing sounds, all with auto shutoff. To use this function you will need 3 x AA batteries (not included). Pros: Flexible, useful when recovering from birth, customisable height to fit most beds, built-in soothing centre Cons: Not a true bedside crib, very heavy, need batteries to access the soothing centre functions, expensive Available from: Halo, John Lewis and Boots John Lewis & Partners £249.00 Buy now How do you use a bedside crib safely? The most important piece of advice for safe sleeping is to lie your baby on their back to sleep. Indeed, since the Back To Sleep campaign was launched in the UK 30 years ago, cases of SIDS (Sudden Infant Death Syndrome) have fallen by 80%. When using a bedside crib, you should ensure there is no gap between the adult's and baby's mattress. Your baby's mattress should be firm and flat, and sit snugly in the crib with no gaps. Also look for a mattress that is breathable. There's a simple test you can do for this – our at-home mattress breathability test: Pick up the mattress and place it close to your mouth • Breathe in and see how easy it is to breathe out with the mattress near your mouth • If it's easier this should mean the mattress offers good ventilation. Most cribs come with a mattress as standard, but if you are given the crib by someone else or buy one second-hand you will need to buy a new mattress – even if the existing one appears to be in good condition. Second-hand mattresses may increase the risk of SIDS and are less likely to be supportive after losing their shape over time. Always use the mattress designed to fit your bedside crib – most retailers sell them separately should you need a replacement. When it comes to a safe sleeping position, place your baby in the crib with their feet at the end of the crib – called the feet-to-foot position. This reduces the risk of their face or head slipping down under the covers if you're using a blanket. How to use tilting and rocking features safely Some bedside cribs offer a tilt option, which may help babies with digestive issues, colic or reflux.
If you are going to tilt your baby, you must do so with great care and only at a slight angle, to avoid your baby slipping down. We recommend speaking to your GP or health visitor for advice before using the tilt function. Tilting (and rocking) can only be used when the bedside crib is set up as a standalone crib – for safety reasons, you should not tilt or rock the crib when the side is down as there is a chance your baby could fall out. What bedding can I use with a bedside crib? The Lullaby Trust advises, “Firmly tucked-in sheets and blankets (not above shoulder height) or a baby sleep bag are safe for a baby to sleep in.” Make sure you buy the correct size sheets that exactly fit your mattress. You may also choose to swaddle a newborn. The Lullaby Trust does not advise for or against swaddling, but it does have some basic swaddling guidance. You must stop using a swaddle as soon as your baby learns to roll. Not all baby sleeping bags and swaddles are created equal, so make sure the brand you buy adheres to safety standards, is the correct tog for the room temperature and season, and is the right size for your baby, so they can't slip down inside. Don't use any soft or bulky bedding and never use pillows, duvets, baby bumpers or baby positioners. You should also remove any soft toys from the crib before your baby sleeps.
You are given a reference document. You must only use information found in the reference document to answer the question asked. EVIDENCE: [verbatim repeat of the MadeForMums bedside crib reference document above] USER: What is the best co sleeper for me and my new baby? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
21
12
5,362
null
8
Only give responses with information found in the text below. Limit your response to 200 words or less. Focus on historical significance that could be linked to current practices. Keep in the style of formal writing for a college institution.
What were the negatives of having such low biodiversity for the coffee plant?
Context: tall bushes to promote branching and the production of new leaves, as well as to facilitate plucking them. Various processing methods are used to attain different levels of oxidation and produce certain kinds of tea, such as black, white, oolong, green, and pu'erh. Basic processing includes plucking, withering (to wilt and soften the leaves), rolling (to shape the leaves and slow drying), oxidizing, and drying. However, depending on the tea type, some steps are repeated or omitted. For example, green tea is made by withering and rolling leaves at a low heat, and oxidation is skipped; for oolong, rolling and oxidizing are performed repeatedly; and for black, extensive oxidation (fermentation) is employed. 3.5.1 The Discovery of Tea Tea was discovered in 2700 BCE by the ancient Chinese emperor Shen Nung, who had a keen interest in herbal medicine and introduced the practice of drinking boiled water to prevent stomach ailments. According to legend, once, when the emperor camped in a forest during one of his excursions, his servants set up a pot of boiling water under a tree. A fragrance attracted his attention, and he found that a few dry leaves from the tree had fallen accidentally into the boiling pot and changed the color of the water; this was the source of the aroma. He took a few sips of that water and noticed its stimulative effect instantly. The emperor experimented with the leaves of that tree, now called Camellia sinensis, and thus the drink "cha" came into existence. Initially, it was used as a tonic, but it became a popular beverage around 350 BCE. The historian Lu Yu of the Tang dynasty (618–907 CE) wrote a book on tea called Cha jing (The Classic of Tea) that contains a detailed description of how to cultivate, process, and brew tea. Tea spread to Japan and Korea in the seventh century thanks to Buddhist monks, and drinking it became an essential cultural ritual. Formal tea ceremonies soon began. However, tea reached other countries only after the sixteenth century. In 1557, the Portuguese established their first trading center in Macau, and the Dutch soon followed suit. In 1610, some Dutch traders in Macau took tea back to the Dutch royal family as a gift. The royal family took an immediate liking to it. When the Portuguese princess Catherine of Braganza married King Charles II of England in 1662, she introduced tea to England. Tea passed from the royal family to the nobles, but for an extended period, it remained unknown and unaffordable to common folks in Europe. The supply of tea in Europe was scant and very costly: one pound of tea was equal to nine months' wages for a British laborer. As European trade with China increased, more tea reached Europe, and consumption of tea increased proportionally. For example, in 1680, Britain imported a hundred pounds of tea; however, in 1700, it brought in a million. The British government allowed the British East India Company to monopolize the trade, and by 1785, the company was buying 15 million pounds of tea from China annually and selling it worldwide. Eventually, in the early eighteenth century, tea reached the homes of British commoners. 
The East India Company was buying so much of it that it caused a crisis for the mercantilist British economy. The company came up with a plan to buy tea in exchange for opium instead of gold and silver. Although opium was banned within China, it was in demand and sold at very high prices on the black market. After the Battle of Plassey in 1757, several northern provinces in India came under the control of the East India Company, and the company began cultivating poppy in Bengal, Bihar, Orissa, and eastern Uttar Pradesh. Such cultivation was compulsory, and the company also banned farmers from growing grain and built opium factories in Patna and Banaras. The opium was then transported to Calcutta for auction before British ships carried it to the Chinese border. The East India Company also helped set up an extensive network of opium smugglers in China, who then transported opium domestically and sold it on the black market. After the successful establishment of this smuggling network, British ships bought tea on credit at the port of Canton (now Guangzhou), China, and later paid for it with opium in Calcutta (now Kolkata). The company not only acquired the tea that was so in demand but also started making huge profits from selling opium. This mixed business of opium and tea began to strengthen the British economy and made it easier for the British to become front-runners among the European powers. By the 1830s, British traders were selling 1,400 tons of opium to China every year, and as a result, a large number of Chinese became opium addicts. The Chinese government began a crackdown on smugglers and further tightened the laws related to opium, and in 1838, it imposed the death penalty on opium smugglers. Furthermore, despite immense pressure from the East India Company to allow the open trading of opium, the Chinese emperor would not capitulate. However, that did not curb his subjects' addiction and the growing demand for opium. In 1839, by order of the Chinese emperor, a British ship was detained in the port of Canton, and the opium therein was destroyed. The British government asked the Chinese emperor to apologize and demanded compensation; he refused. The British retaliated by attacking a number of Chinese ports and coastal cities. China could not compete with Britain's state-of-the-art weapons and, defeated, accepted the terms of the Treaty of Nanjing in 1842 and the Treaty of the Bogue in 1843, which opened Canton, Shanghai, and ports in Fujian, among others, to British merchants and other Europeans. In 1856, another war broke out between China and Britain, which ended with a treaty that made the sale of opium legal and allowed Christian missionaries to operate in China. But the tension between China and Europe remained. In 1860, the British and French seized Beijing and burned the imperial Summer Palace. The subsequent Beijing Convention of 1860 further undermined China's sovereignty, and the British gained a monopoly on the tea trade.

3.5.3 The Co-option of Tea and the Establishment of Plantations in European Colonies

The Dutch, Portuguese, and French had less success in the tea trade than the British. To overcome British domination, the Portuguese planned to develop tea gardens outside China. Camellia sinensis is native to China and at the time was not found in any other country. There was a law against taking these plants out of the country, and the method for processing tea was also a trade secret.
In the mid-eighteenth century, many Europeans smuggled seeds and plants out of China, but they were unable to grow them. Then, in 1750, the Portuguese smuggled Camellia plants and some trained specialists out of China and succeeded in establishing tea gardens in the mountainous regions of the Azores Islands, which have a climate favorable for tea cultivation. With the help of Chinese laborers and experts, black and green tea were successfully produced in the Portuguese tea plantations. Soon, Portugal and its colonies no longer needed to import tea at all. As the owners of the first tea plantations outside China, the Portuguese remained vigilant in protecting their monopoly. It was some time before other European powers gained the ability to grow and process tea themselves. In the early nineteenth century, the British began exploring the idea of planting tea saplings in India. In 1824, Robert Bruce, an officer of the British East India Company, came across a variety of tea popular among the Singpho clan of Assam, India. He used this variety to develop the first tea garden in the Chabua area of Assam, and in 1840, the Assam Tea Company began production. This success was instrumental in the establishment of tea estates throughout India and in other British colonies. In 1848, the East India Company hired Robert Fortune, a plant hunter, to smuggle tea saplings and information about tea processing out of China. Fortune was the superintendent of the hothouse department of the British Horticultural Society in Chiswick, London. He had visited China three times before this assignment; the first, in 1843, had been sponsored by the horticultural society, which was interested in acquiring important botanical treasures from China by exploiting the opportunity offered by the 1842 Treaty of Nanjing after the First Opium War. Fortune managed to visit the interior of China (where foreigners were forbidden) and also gathered valuable information about the cultivation of important plants, successfully smuggling over 120 plant species into Britain. In the autumn of 1848, Fortune entered China and traveled for nearly three years while carefully collecting information related to tea cultivation and processing. He noted that black and green teas were made from the leaves of the same plant, Camellia sinensis, except that the former was "fermented" for a longer period. Eventually, Fortune succeeded in smuggling 20,000 saplings of Camellia sinensis to Calcutta, India, in Wardian cases. (The Wardian case, a precursor to the modern terrarium, was a special type of sealed glass box made by the British doctor Nathaniel Bagshaw Ward in 1829. The delicate plants within could thrive for months. The plant hunter Joseph Hooker successfully used Wardian cases to bring some plants from the Antarctic to England. In 1833, Ward also succeeded in sending hundreds of small ornamental plants from England to Australia in these boxes; after two years, another voyage carried Australian plants back to England in Wardian cases.) Fortune also brought trained artisans from China to India. These plants and artisans were transported from Calcutta to Darjeeling. At Darjeeling, a nursery was set up for the propagation of tea saplings on a large scale, supplying plantlets to all the tea gardens in India, Sri Lanka, and other British colonies. The British forced the poor tribal population of the Assam, Bengal, Bihar, and Orissa provinces off their land and sent them to work on the tea estates.
Tamils from the southern provinces of India were also sent to work on the tea plantations of Sri Lanka. Tea plantations were modeled on the sugar colonies of the Caribbean, and thus the plight of the workers was in some ways similar to that of the slaves on Caribbean plantations. Samuel Davidson's Sirocco tea dryer, the first tea-processing machine, was introduced in Sri Lanka in 1877, followed by John Walker's tea-rolling machine in 1880. These machines were soon adopted by tea estates in India and other British colonies as well. As a result, British tea production increased greatly. By 1888, India became the number-one exporter of tea to Britain, sending the country 86 million pounds of tea. After India, Sri Lanka became prime ground for tea plantations. In the last decades of the nineteenth century, an outbreak of the fungal pathogen Hemileia vastatrix, the causal agent of coffee leaf rust, resulted in the destruction of the coffee plantations in Sri Lanka. The British owners of those estates quickly opted to plant tea instead, and a decade later, tea plantations covered nearly 400,000 acres of land in Sri Lanka. By 1927, Sri Lanka alone produced 100,000 tons per year. All this tea was for export. Within the British Empire, fermented black tea was produced, for which Assam, Ceylon, and Darjeeling tea are still famous. Black tea produced in India and Sri Lanka was considered of lesser quality than Chinese tea, but it was very cheap and easily became popular in Asian and African countries. In addition to India and Ceylon, British planters introduced tea plantations to fifty other countries.

3.6 The Story of Coffee

Coffee is made from the roasted seeds of the coffee plant, a shrub belonging to the Rubiaceae family of flowering plants. There are over 120 species in the genus Coffea, and all are of tropical African origin. Only Coffea arabica and Coffea canephora are used for making coffee. Coffea arabica (figure 3.10) is preferred for its sweeter taste and is the source of 60–80 percent of the world's coffee. It is an allotetraploid species that resulted from hybridization between the diploids Coffea canephora and Coffea eugenioides. In the wild, coffee plants grow between thirty and forty feet tall and produce berries throughout the year. A coffee berry usually contains two seeds (a.k.a. beans). Coffee berries are nonclimacteric fruits, which ripen slowly on the plant itself (and unlike apples, bananas, mangoes, etc., their ripening cannot be induced after harvest by ethylene). Thus ripe berries, known as "cherries," are picked every other week as they naturally ripen. To facilitate the manual picking of cherries, plants are pruned to a height of three to four feet. Pruning coffee plants is also essential to maximizing coffee production: it maintains the correct balance of leaf to fruit, prevents overbearing, stimulates root growth, and helps deter pests. Coffee is also a stimulant, and the secret of this elixir is the caffeine present in high quantities in its fruits and seeds. When our bodies are exhausted, adenosine levels increase. The adenosine molecules bind to adenosine receptors in our brains, resulting in the transduction of sleep signals. The structure of caffeine is similar to that of adenosine, so when it reaches a weary brain, caffeine can also bind to the adenosine receptor and block adenosine molecules from accessing it, thus disrupting sleep signals.
3.6.1 The History of Coffee

Coffea arabica is native to Ethiopia. The people of Ethiopia first recognized the stimulative properties of coffee in the ninth century. According to legend, one day, a shepherd named Kaldi, who hailed from a small village in the highlands of Ethiopia, saw his goats dancing energetically after eating berries from a wild bush. Out of curiosity, he ate a few berries and felt refreshed. Kaldi took some berries back to the village to share, and the people there enjoyed them too. Hence the local custom of eating raw coffee berries began. There are records that coffee berries were often found in the pockets of slaves brought to the port of Mokha from the highlands of Ethiopia. Later, the people of Ethiopia started mixing ground berries with butter and herbs to make balls. The coffee we drink today was first brewed in Yemen in the thirteenth century. It became popular among Yemen's clerics and Sufis, who routinely held religious and philosophical discussions late into the night; coffee rescued them from sleep and exhaustion. Gradually, coffee became popular, and coffeehouses opened up all over Arabia, where travelers, artists, poets, and common folk visited and had a chance to gossip and debate on a variety of topics, including politics. Governments often shut down coffeehouses for fear of political unrest and revolution. Between the sixteenth and seventeenth centuries, coffeehouses were banned several times in many places, including Turkey, Mecca, and Egypt. But coffeehouses always opened again, and coffee became ingrained in Arab culture. Arabs developed many methods of processing coffee beans. Usually, these methods included drying coffee cherries to separate the beans. Dried coffee beans can be stored for many years. Larger and heavier beans are considered better. The taste and aroma develop during roasting, which determines the quality and price of the coffee. Dried coffee beans are dark green, but roasting them at a controlled temperature causes a slow transformation. First, they turn yellow, then light brown, while also popping up and doubling in size. After continued roasting, all the water inside them dries up, and the beans turn black like charcoal. The starch inside the beans first turns into sugar, and then the sugar turns into caramel, at which point many aromatic compounds come out of the cells of the beans. Roasting coffee beans is an art, and a skilled roaster is a very important part of the coffee trade.

3.6.2 The Spread of Coffee out of Arabia

Coffee was introduced to Europeans in the seventeenth century, when trade between the Ottoman Empire and Europe increased. In 1669, the Turkish ambassador Suleiman Agha (Müteferrika Süleyman Ağa) arrived in the court of Louis XIV with many valuable gifts, including coffee. The French subsequently became obsessed with the sophisticated etiquette of the Ottoman Empire. In the company of Agha, the royal court and other elites of Parisian society indulged in drinking coffee. Agha held extravagant coffee ceremonies at his residence in Paris, where waiters dressed in Ottoman costumes served coffee to Parisian society women. Suleiman's visit piqued French elites' interest in Turquerie and Orientalism, which became fashionable. In the history of France, 1669 is thought of as the year of "Turkomania." A decade later, coffee reached Vienna, after the Ottoman army was defeated at the Battle of Vienna in 1683.
After the victory, the Viennese seized the goods left behind by the Turkish soldiers, including several thousand sacks of coffee beans. The soldiers of Vienna did not know what the beans were and simply discarded them, but one man, Kolshitsky, snatched them up. Kolshitsky knew how to make coffee, and he opened the first coffeehouse in Vienna with the spoils. By the end of the seventeenth century, coffeehouses had become common in all the main cities of Europe. In London alone, by 1715, there were more than 2,000 coffeehouses. As in Arabia, the coffeehouses of Europe also became the bases of sociopolitical debates and were known as "penny universities."

3.6.3 Coffee Plantations

By the fifteenth century, demand for coffee had increased so much that the harvest of berries from the wild was not enough, and thus in Yemen, people began to plant coffee. Following Yemen's lead, other Arab countries also started coffee plantations. Until the seventeenth century, coffee was cultivated only within North African and Arab countries. Arabs were very protective of their monopoly on the coffee trade. The cultivation of coffee and the processing of its seeds were a mystery to the world outside of Arabia. Foreigners were not allowed to visit coffee farms, and only roasted coffee beans (incapable of producing new plants) were exported. Around 1600, Baba Budan, a Sufi who was on the Haj pilgrimage, successfully smuggled seven coffee seeds into India and started a small coffee nursery in Mysore. The early coffee plantations of South India used plants propagated from Budan's garden. In 1616, a Dutch spy also succeeded in stealing coffee beans from Arabia, and these were used by the Dutch East India Company as starters for coffee plantations in Java, Sumatra, Bali, Sri Lanka, Timor, and Suriname (Dutch Guiana). In 1706, a coffee plant from Java was brought to the botanic gardens of Amsterdam, and from there, its offspring reached the Jardin des Plantes in Paris. A clone of the Parisian plant was sent to the French colony of Martinique, and then its offspring spread to the French colonies in the Caribbean, South America, and Africa. In 1728, a Portuguese officer from Dutch Guiana brought coffee seeds to Brazil, which served as starters for the coffee plantations there. The Portuguese also introduced coffee to African countries and Indonesia, and the British established plantations in their Caribbean colonies, India, and Sri Lanka from Dutch stock. In summary, all European coffee plants came from the same Arabian mother plant, so the biodiversity within their coffee plantations was almost zero, which had devastating consequences. In the last decades of the nineteenth century, the fungal pathogen Hemileia vastatrix severely infected coffee plantations in Sri Lanka, India, Java, Sumatra, and Malaysia. As a result, rust disease destroyed the coffee plantations one by one. Later, some coffee plantations were replanted with Coffea canephora (syn. Coffea robusta), which has a natural resistance to rust, while others were converted into tea plantations (as in the case of Sri Lanka, discussed earlier). European coffee plantations used the same model as tea or sugar plantations, and so their workers lived under the same conditions. European powers forcibly employed the poor native population on these plantations and used indentured laborers as needed.
For example, in Sri Lanka, the Sinhalese population refused to work on the coffee farms, so British planters recruited 100,000 indentured Tamil workers from India to work the farms and tea plantations there.

3.7 The Heritage of Plantations

In the twentieth century, most former European colonies became independent countries. In these countries, private, cooperative, or semigovernmental institutions manage plantations of sugarcane, tea, coffee, or other commercial crops. Though these plantations remain a significant source of revenue and contribute substantially to the national GDP of many countries, their workers still often labor under abject conditions.
Only give responses with information found in the text below. Limit your response to 200 words or less. Focus on historical significance that could be linked to current practices. Keep in the style of formal writing for a college institution.
Only give responses with information found in the text below. Limit your response to 200 words or less. Focus on historical significance that could be linked to current practices. Keep in the style of formal writing for a college institution. EVIDENCE: Context: tall bushes to promote branching and the production of new leaves, as well as to facilitate plucking them. Various processing methods are used to attain different levels of oxidation and produce certain kinds of tea, such as black, white, oolong, green, and pu’erh. Basic processing includes plucking, withering (to wilt and soften the leaves), rolling (to shape the leaves and slow drying), oxidizing, and drying. However, depending on the tea type, some steps are repeated or omitted. For example, green tea is made by withering and rolling leaves at a low heat, and oxidation is skipped; for oolong, rolling and oxidizing are performed repeatedly; and for black, extensive oxidation (fermentation) is employed. 3.5.1 The Discovery of Tea Tea was discovered in 2700 BCE by the ancient Chinese emperor Shen Nung, who had a keen interest in herbal medicine and introduced the practice of drinking boiled water to prevent stomach ailments. According to legend, once, when the emperor camped in a forest during one of his excursions, his servants set up a pot of boiling water under a tree. A fragrance attracted his attention, and he found that a few dry leaves from the tree had Colonial Agriculture | 53 fallen accidentally into the boiling pot and changed the color of the water; this was the source of the aroma. He took a few sips of that water and noticed its stimulative effect instantly. The emperor experimented with the leaves of that tree, now called Camellia sinensis, and thus the drink “cha” came into existence. Initially, it was used as a tonic, but it became a popular beverage around 350 BCE. The historian Lu Yu of the Tang dynasty (618–907 CE) has written a poetry book on tea called Cha jing (The Classic of Tea) that contains a detailed description of how to cultivate, process, and brew tea. Tea spread to Japan and Korea in the seventh century thanks to Buddhist monks, and drinking it became an essential cultural ritual. Formal tea ceremonies soon began. However, tea reached other countries only after the sixteenth century. In 1557, the Portuguese established their first trading center in Macau, and the Dutch soon followed suit. In 1610, some Dutch traders in Macau took tea back to the Dutch royal family as a gift. The royal family took an immediate liking to it. When the Dutch princess Catherine of Braganza married King Charles II of England around 1650, she introduced tea to England. Tea passed from the royal family to the nobles, but for an extended period, it remained unknown and unaffordable to common folks in Europe. The supply of tea in Europe was scant and very costly: one pound of tea was equal to nine months’ wages for a British laborer. As European trade with China increased, more tea reached Europe, and consumption of tea increased proportionally. For example, in 1680, Britain imported a hundred pounds of tea; however, in 1700, it brought in a million. The British government allowed the British East India Company to monopolize the trade, and by 1785, the company was buying 15 million pounds of tea from China annually and selling it worldwide. Eventually, in the early eighteenth century, tea reached the homes of British commoners. 
3.5.2 Tea and the “Opium War” China was self-sufficient; its people wanted nothing from Europe in exchange for tea. But in Europe, the demand for tea increased rapidly in the mid-eighteenth century. Large quantities were being purchased, and Europeans had to pay in silver and gold. The East India Company was buying so much of it that it caused a crisis for the mercantilist British economy. The company came up with a plan to buy tea in exchange for opium instead of gold and silver. Although opium was banned within China, it was in demand and sold at very high prices on the black market. After the Battle of Plassey in 1757, several northern provinces in India came under the control of the East India Company, and the company began cultivating poppy in Bengal, Bihar, Orissa, and eastern Uttar Pradesh. Such cultivation was compulsory, and the 54 | Colonial Agriculture company also banned farmers from growing grain and built opium factories in Patna and Banaras. The opium was then transported to Calcutta for auction before British ships carried it to the Chinese border. The East India Company also helped set up an extensive network of opium smugglers in China, who then transported opium domestically and sold it on the black market. After the successful establishment of this smuggling network, British ships bought tea on credit at the port of Canton (now Guangzhou), China, and later paid for it with opium in Calcutta (now Kolkata). The company not only acquired the tea that was so in demand but also started making huge profits from selling opium. This mixed business of opium and tea began to strengthen the British economy and made it easier for the British to become front-runners among the European powers. By the 1830s, British traders were selling 1,400 tons of opium to China every year, and as a result, a large number of Chinese became opium addicts. The Chinese government began a crackdown on smugglers and further tightened the laws related to opium, and in 1838, it imposed death sentences on opium smugglers. Furthermore, despite immense pressure from the East India Company to allow the open trading of opium, the Chinese emperor would not capitulate. However, that did not curb his subjects’ addiction and the growing demand for opium. In 1839, by order of the Chinese emperor, a British ship was detained in the port of Canton, and the opium therein was destroyed. The British government asked the Chinese emperor to apologize and demanded compensation; he refused. British retaliated by attacking a number of Chinese ports and coastal cities. China could not compete with Britain’s state-of- the-art weapons, and defeated, China accepted the terms of the Treaty of Nanjing in 1842 and the Treaty of Bog in 1843, which opened the ports of Canton, Fujian, and Shanghai, among others, to British merchants and other Europeans. In 1856, another small war broke out between China and Britain, which ended with a treaty that made the sale of opium legal and allowed Christian missionaries to operate in China. But the tension between China and Europe remained. In 1859, the British and French seized Beijing and burned the royal Summer Palace. The subsequent Beijing Convention of 1860 ended China’s sovereignty, and the British gained a monopoly on the tea trade. 3.5.3 The Co-option of Tea and the Establishment of Plantations in European Colonies Unlike the British, the Dutch, Portuguese, and French had less success in the tea trade. 
To overcome British domination, the Portuguese planned to develop tea gardens outside China. Camellia is native to China, and it was not found in any other country. There was Colonial Agriculture | 55 a law against taking these plants out of the country, and the method for processing tea was also a trade secret. In the mid-eighteenth century, many Europeans smuggled the seeds and plants from China, but they were unable to grow them. Then, in 1750, the Portuguese smuggled the Camellia plants and some trained specialists out of China and succeeded in establishing tea gardens in the mountainous regions of the Azores Islands, which have a climate favorable for tea cultivation. With the help of Chinese laborers and experts, black and green tea were successfully produced in the Portuguese tea plantations. Soon, Portugal and its colonies no longer needed to import tea at all. As the owners of the first tea plantations outside China, the Portuguese remained vigilant in protecting their monopoly. It was some time before other European powers gained the ability to grow and process tea themselves. In the early nineteenth century, the British began exploring the idea of planting tea saplings in India. In 1824, Robert Bruce, an officer of the British East India Company, came across a variety of tea popular among the Singpho clan of Assam, India. He used this variety to develop the first tea garden in the Chauba area of Assam, and in 1840, the Assam Tea Company began production. This success was instrumental to the establishment of tea estates throughout India and in other British colonies. In 1848, the East India Company hired Robert Fortune, a plant hunter, to smuggle tea saplings and information about tea processing from China. Fortune was the superintendent of the hothouse department of the British Horticultural Society in Cheswick, London. He had visited China three times before this assignment; the first, in 1843, had been sponsored by the horticultural society, which was interested in acquiring important botanical treasures from China by exploiting the opportunity offered by the 1842 Treaty of Nanking after the First Opium War. Fortune managed to visit the interior of China (where foreigners were forbidden) and also gathered valuable information about the cultivation of important plants, successfully smuggling over 120 plant species into Britain. In the autumn of 1848, Fortune entered China and traveled for nearly three years while carefully collecting information related to tea cultivation and processing. He noted that black and green teas were made from the leaves of the same plant, Camellia sinensis, except that the former was “fermented” for a longer period. Eventually, Fortune succeeded in smuggling 20,000 saplings of Camellia sinensis to Calcutta, India, in Wardian cases.4 4. The Wardian case, a precursor to the modern terrarium, was a special type of sealed glass box made by British doctor Nathaniel Bagshaw Ward in 1829. The delicate plants within them could thrive for months. Plant hunter Joseph Hooker successfully used Wardian cases to bring some plants from the Antarctic to England. In 1933, Nathaniel Ward also succeeded in sending hundreds of small ornamental plants from England to Australia in these boxes. After two years, another voyage carried 56 | Colonial Agriculture He also brought trained artisans from China to India. These plants and artisans were transported from Calcutta to Darjeeling, Assam. 
At Darjeeling, a nursery was set up for the propagation of tea saplings at a large scale, supplying plantlets to all the tea gardens in India, Sri Lanka, and other British colonies. The British forced the poor tribal population of the Assam, Bengal, Bihar, and Orissa provinces out of their land, and they were sent to work in tea estates. Tamils from the southern province of India were also sent to work in the tea plantation of Sri Lanka. Tea plantations were modeled on the sugar colonies of the Caribbean, and thus the plight of the workers was in some ways similar to that of the slaves from Caribbean plantations. Samuel Davidson’s Sirocco tea dryer, the first tea-processing machine, was introduced in Sri Lanka in 1877, followed by John Walker’s tea-rolling machine in 1880. These machines were soon adopted by tea estates in India and other British colonies as well. As a result, British tea production increased greatly. By 1888, India became the number-one exporter of tea to Britain, sending the country 86 million pounds of tea. After India, Sri Lanka became prime ground for tea plantations. In the last decades of the nineteenth century, an outbreak of the fungal pathogen Hemilia vastatrix, a causal agent of rust, resulted in the destruction of the coffee plantations in Sri Lanka. The British owners of those estates quickly opted to plant tea instead, and a decade later, tea plantations covered nearly 400,000 acres of land in Sri Lanka. By 1927, Sri Lanka alone produced 100,000 tons per year. All this tea was for export. Within the British Empire, fermented black tea was produced, for which Assam, Ceylon, and Darjeeling tea are still famous. Black tea produced in India and Sri Lanka was considered of lesser quality than Chinese tea, but it was very cheap and easily became popular in Asian and African countries. In addition to India and Ceylon, British planters introduced tea plantations to fifty other countries. 3.6 The Story of Coffee Coffee is made from the roasted seeds of the coffee plant, a shrub belonging to the Rubiaceae family of flowering plants. There are over 120 species in the genus Coffea, and all are of tropical African origin. Only Coffea arabica and Coffea canephora are used for making coffee. Coffea arabica (figure 3.10) is preferred for its sweeter taste and is the source of 60–80 percent of the world’s coffee. It is an allotetraploid species that resulted from hybridization between the diploids Coffea canephora and Coffea eugenioides. In the Colonial Agriculture | 57 wild, coffee plants grow between thirty and forty feet tall and produce berries throughout the year. A coffee berry usually contains two seeds (a.k.a. beans). Coffee berries are nonclimacteric fruits, which ripen slowly on the plant itself (and unlike apples, bananas, mangoes, etc., their ripening cannot be induced after harvest by ethylene). Thus ripe berries, known as “cherries,” are picked every other week as they naturally ripen. To facilitate the manual picking of cherries, plants are pruned to a height of three to four feet. Pruning coffee plants is also essential to maximizing coffee production to maintain the correct balance of leaf to fruit, prevent overbearing, stimulate root growth, and effectively deter pests. Coffee is also a stimulative, and the secret of this elixir is the caffeine present in high quantities in its fruits and seeds. In its normal state, when our bodies are exhausted, there is an increase in adenosine molecules. 
The adenosine molecules bind to adenosine receptors in our brains, resulting in the transduction of sleep signals. The structure of caffeine is similar to that of adenosine, so when it reaches a weary brain, caffeine can also bind to the adenosine receptor and block adenosine molecules from accessing it, thus disrupting sleep signals. 58 | Colonial Agriculture 3.6.1 The History of Coffee Coffea arabica is native to Ethiopia. The people of Ethiopia first recognized the stimulative properties of coffee in the ninth century. According to legend, one day, a shepherd named Kaldi, who hailed from a small village in the highlands of Ethiopia, saw his goats dancing energetically after eating berries from a wild bush. Out of curiosity, he ate a few berries and felt refreshed. Kaldi took some berries back to the village to share, and the people there enjoyed them too. Hence the local custom of eating raw coffee berries began. There are records that coffee berries were often found in the pockets of slaves brought to the port of Mokha from the highlands of Ethiopia. Later, the people of Ethiopia started mixing ground berries with butter and herbs to make balls. The coffee we drink today was first brewed in Yemen in the thirteenth century. It became popular among Yemen’s clerics and Sufis, who routinely held religious and philosophical discussions late into the night; coffee rescued them from sleep and exhaustion. Gradually, coffee became popular, and coffeehouses opened up all over Arabia, where travelers, artists, poets, and common folks visited and had a chance to gossip and debate on a variety of topics, including politics. Often, governments shut down coffeehouses for fear of political unrest and revolution. Between the sixteenth and seventeenth centuries, coffeehouses were banned several times in many Arab countries, including Turkey, Mecca, and Egypt. But coffeehouses always opened again, and coffee became ingrained in Arab culture. Arabs developed many methods of processing coffee beans. Usually, these methods included drying coffee cherries to separate the beans. Dried coffee beans can be stored for many years. Larger and heavier beans are considered better. The taste and aroma develop during roasting, which determines the quality and price of the coffee. Dried coffee beans are dark green, but roasting them at a controlled temperature causes a slow transformation. First, they turn yellow, then light brown, while also popping up and doubling in size. After continued roasting, all the water inside them dries up, and the beans turn black like charcoal. The starch inside the beans first turns into sugar, and then sugar turns into caramel, at which point many aromatic compounds come out of the cells of the beans. Roasting coffee beans is an art, and a skilled roaster is a very important part of the coffee trade. 3.6.2 The Spread of Coffee out of Arabia Coffee was introduced to Europeans in the seventeenth century, when trade between the Ottoman Empire and Europe increased. In 1669, Turkish ambassador Suleiman Agha (Müteferrika Süleyman Ağa) arrived in the court of Louis XIV with many valuable gifts, Colonial Agriculture | 59 including coffee. The French subsequently became obsessed with the sophisticated etiquettes of the Ottoman Empire. In the company of Aga, the royal court and other elites of Parisian society indulged in drinking coffee. Aga held extravagant coffee ceremonies at his residence in Paris, where waiters dressed in Ottoman costumes served coffee to Parisian society women. 
Suleiman’s visit piqued French elites’ interest in Turquerie and Orientalism, which became fashionable. In the history of France, 1669 is remembered as the year of “Turquerie.” Coffee reached Vienna in 1683, when the Ottomans were defeated at the Battle of Vienna. After the victory, the Viennese seized the goods left behind by the Turkish soldiers, including several thousand sacks of coffee beans. The soldiers of Vienna didn’t know what it was and simply discarded it, but one man, Kolshitsky, snatched it up. Kolshitsky knew how to make coffee, and he opened the first coffeehouse in Vienna with the spoils. By the end of the seventeenth century, coffeehouses had become common in all the main cities of Europe. In London alone, by 1715, there were more than 2,000 coffeehouses. As in Arabia, the coffeehouses of Europe also became hubs of sociopolitical debate and were known as “penny universities.” 3.6.3 Coffee Plantations By the fifteenth century, demand for coffee had increased so much that the harvest of berries from the wild was not enough, and thus in Yemen, people began to plant coffee. Following Yemen’s lead, other Arab countries also started coffee plantations. Until the seventeenth century, coffee was cultivated only within North African and Arab countries. Arabs were very protective of their monopoly on the coffee trade. The cultivation of coffee and the processing of seeds was a mystery to the world outside of Arabia. Foreigners were not allowed to visit coffee farms, and only roasted coffee beans (incapable of producing new plants) were exported. Around 1600, Baba Budan, a Sufi who was on the Haj pilgrimage, successfully smuggled seven coffee seeds into India and started a small coffee nursery in Mysore. The early coffee plantations of South India were propagated from plants in Budan’s garden. In 1616, a Dutch spy also succeeded in stealing coffee beans from Arabia, and these were used by the Dutch East India Company as starters for coffee plantations in Java, Sumatra, Bali, Sri Lanka, Timor, and Suriname (Dutch Guiana). In 1706, a coffee plant from Java was brought to the botanic gardens of Amsterdam, and from there, its offspring reached the Jardin des Plantes in Paris. A clone of the Parisian plant was sent to the French colony Martinique, and then its offspring spread to the French colonies in the Caribbean, South America, and Africa. In 1728, a Portuguese officer from Dutch Guiana brought coffee seeds to Brazil, which served as starters for the coffee plantations there. The Portuguese also introduced coffee to African countries and Indonesia, and the British established plantations in their Caribbean colonies, India, and Sri Lanka from Dutch stock. In summary, all European coffee plants came from the same Arabian mother plant. So the biodiversity within their coffee plantations was almost zero, which had devastating consequences. In the last decades of the nineteenth century, the fungal pathogen Hemileia vastatrix severely infected coffee plantations in Sri Lanka, India, Java, Sumatra, and Malaysia. As a result, rust disease destroyed the coffee plantations one by one. Later, in some of the coffee plantations, Coffea canephora (syn. Coffea robusta), which has a natural resistance to rust, was planted, but others were converted into tea plantations (as in the case of Sri Lanka, discussed earlier). European coffee plantations used the same model as tea or sugar plantations, and so their workers lived under the same conditions.
European powers forcibly employed the poor native population in these plantations and used indentured laborers as needed. For example, in Sri Lanka, the Sinhalese population refused to work in the coffee farms, so British planters recruited 100,000 indentured Tamil workers from India to work the farms and tea plantations there. 3.7 The Heritage of Plantations In the twentieth century, most former European colonies became independent countries. In these countries, private, cooperative, or semigovernmental institutions manage plantations of sugarcane, tea, coffee, or other commercial crops. Though these plantations remain a major source of revenue and contribute significantly to the national GDP of many countries, their workers still often labor under abject conditions. USER: What were the negatives of having such low biodiversity for the coffee plant? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
40
13
3,502
null
530
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document]
I'm looking to buy a new pair of headphones. I'm hoping to spend less than $100, while still getting good audio quality. Can you recommend a good option, and list out some of the defining features of the one you recommend?
We all want the best, but sometimes, the top-tier choice exceeds our budget. Thankfully, cheap Bluetooth headphones are easy to come by, and with the rise of wireless earbuds, premium headset prices have fallen dramatically. Although all of our picks are relatively affordable, none of them are inherently cheap. Whether you need active noise canceling (ANC), a compact design, or long battery life, we’ve got something to scratch your audio itch. Editors note: This article was updated on June 28th, 2024, to add new top picks. For under $100, the JLab JBuds Lux are crazy good value The JLab JBuds Lux ANC sitting atop a wooden desk. The JBuds Lux is a compelling buy under $80. The JLab JBuds Lux ANC punches well above its weight, offering exceptional value for under $100. These over-ear headphones feature active noise cancelation that does a decent job of hushing ambient noise, especially in the higher frequencies. While the ANC performance can’t match premium models, it’s impressive for the price. The sound quality is quite good, with an overall MDAQS score of 4.5/5, lauding the headphones’ faithful timbre and immersive soundstage. They have an elevated bass response and boosted treble that helps counter environmental noise during commutes. Other highlights include a 44-hour battery life, USB-C audio support, and a companion app with EQ and customization options. The JBuds Lux ANC may lack advanced features like head tracking, but they nail the fundamentals at a stellar price, making them one of the best budget ANC headphones you can buy. JLab JBuds Lux ANC SG recommended JLab JBuds Lux ANC USB-C audio • Sound quality • Comfort MSRP: $79.99 For under $100, these are crazy good value. As far as inexpensive ANC headphones go, the JLab JBuds Lux ANC are one of the best of 2024. They focus on the fundamentals, and not fighting the spec wars. The Anker Soundcore Space One has style Anker Soundcore Space One headphones next to cloth case and cables. Along with the headphones, you get a cloth carrying case, USB-C charging cable, and 3.5mm auxiliary cable. The Anker Soundcore Space One is a solid choice for consumers seeking noise canceling headphones under $100. The headphones have good isolation and active noise cancelation (ANC), wear detection, long battery life, and the inclusion of Bluetooth 5.3 with LDAC support. The companion app further allows customizable sound profiles and ANC adjustments. The absence of touch controls and an audio profile that leans towards over-emphasized bass and treble may deter some users. Additionally, the lack of audio-over-USB functionality limits its versatility compared to some competitors. Despite these drawbacks, the overall value proposition remains strong, especially considering the headphones’ effective noise cancelation, sound customization options through the app, and robust battery life of nearly 43 hours. Anker Soundcore Space One Anker Soundcore Space One Comfortable fit • Easy controls • Soundcore app MSRP: $99.99 Luxury features at a budget price. Listen and be heard with the Jabra Elite 45h The Jabra Elite 45h on-ear Bluetooth headphones next to a Samsung Galaxy S10e smartphone and wireless car keys on a white table. Bluetooth multipoint is available but not very reliable. The Jabra Elite 45h are on-ear headphones designed to be compact and portable enough to take anywhere—whether you’re commuting to work, running errands, or just putting your feet up at home. The swivel ear cups make it easy to shove into a backpack for easy transport. 
The headphones’ bass-heavy frequency response makes it hard to hear higher-pitched vocals. Fortunately, you can create a custom EQ in the Jabra Sound+ app (Android/iOS) and tinker all day long. If you don’t want to experiment, Jabra has a hearing test that informs an optimized sound profile. One of the best aspects of the Jabra Elite 45h is its microphone. It reproduces voices accurately, and even people with deep voices will be heard loud and clear. The microphone also does a good job of attenuating background noise and light wind, eliminating audible distractions during conference calls. Other features that make the Jabra Elite 45h a worthy investment include a 50+ hour battery life, USB-C fast charging, AAC codec support (which is great for iOS users), and an included two-year warranty that covers dust and water damage.
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. I'm looking to buy a new pair of headphones. I'm hoping to spend less than $100, while still getting good audio quality. Can you recommend a good option, and list out some of the defining features of the one you recommend? We all want the best, but sometimes, the top-tier choice exceeds our budget. Thankfully, cheap Bluetooth headphones are easy to come by, and with the rise of wireless earbuds, premium headset prices have fallen dramatically. Although all of our picks are relatively affordable, none of them are inherently cheap. Whether you need active noise canceling (ANC), a compact design, or long battery life, we’ve got something to scratch your audio itch. Editors note: This article was updated on June 28th, 2024, to add new top picks. For under $100, the JLab JBuds Lux are crazy good value The JLab JBuds Lux ANC sitting atop a wooden desk. The JBuds Lux is a compelling buy under $80. The JLab JBuds Lux ANC punches well above its weight, offering exceptional value for under $100. These over-ear headphones feature active noise cancelation that does a decent job of hushing ambient noise, especially in the higher frequencies. While the ANC performance can’t match premium models, it’s impressive for the price. The sound quality is quite good, with an overall MDAQS score of 4.5/5, lauding the headphones’ faithful timbre and immersive soundstage. They have an elevated bass response and boosted treble that helps counter environmental noise during commutes. Other highlights include a 44-hour battery life, USB-C audio support, and a companion app with EQ and customization options. The JBuds Lux ANC may lack advanced features like head tracking, but they nail the fundamentals at a stellar price, making them one of the best budget ANC headphones you can buy. JLab JBuds Lux ANC SG recommended JLab JBuds Lux ANC USB-C audio • Sound quality • Comfort MSRP: $79.99 For under $100, these are crazy good value. As far as inexpensive ANC headphones go, the JLab JBuds Lux ANC are one of the best of 2024. They focus on the fundamentals, and not fighting the spec wars. The Anker Soundcore Space One has style Anker Soundcore Space One headphones next to cloth case and cables. Along with the headphones, you get a cloth carrying case, USB-C charging cable, and 3.5mm auxiliary cable. The Anker Soundcore Space One is a solid choice for consumers seeking noise canceling headphones under $100. The headphones have good isolation and active noise cancelation (ANC), wear detection, long battery life, and the inclusion of Bluetooth 5.3 with LDAC support. The companion app further allows customizable sound profiles and ANC adjustments. The absence of touch controls and an audio profile that leans towards over-emphasized bass and treble may deter some users. Additionally, the lack of audio-over-USB functionality limits its versatility compared to some competitors. Despite these drawbacks, the overall value proposition remains strong, especially considering the headphones’ effective noise cancelation, sound customization options through the app, and robust battery life of nearly 43 hours. Anker Soundcore Space One Anker Soundcore Space One Comfortable fit • Easy controls • Soundcore app MSRP: $99.99 Luxury features at a budget price. 
Listen and be heard with the Jabra Elite 45h The Jabra Elite 45h on-ear Bluetooth headphones next to a Samsung Galaxy S10e smartphone and wireless car keys on a white table. Bluetooth multipoint is available but not very reliable. The Jabra Elite 45h are on-ear headphones designed to be compact and portable enough to take anywhere—whether you’re commuting to work, running errands, or just putting your feet up at home. The swivel ear cups make it easy to shove into a backpack for easy transport. The headphones’ bass-heavy frequency response makes it hard to hear higher-pitched vocals. Fortunately, you can create a custom EQ in the Jabra Sound+ app (Android/iOS) and tinker all day long. If you don’t want to experiment, Jabra has a hearing test that informs an optimized sound profile. One of the best aspects of the Jabra Elite 45h is its microphone. It reproduces voices accurately, and even people with deep voices will be heard loud and clear. The microphone also does a good job of attenuating background noise and light wind, eliminating audible distractions during conference calls. Other features that make the Jabra Elite 45h a worthy investment include a 50+ hour battery life, USB-C fast charging, AAC codec support (which is great for iOS users), and an included two-year warranty that covers dust and water damage. https://www.soundguys.com/best-cheap-bluetooth-headphones-100-28839/
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources. [user request] [context document] EVIDENCE: We all want the best, but sometimes, the top-tier choice exceeds our budget. Thankfully, cheap Bluetooth headphones are easy to come by, and with the rise of wireless earbuds, premium headset prices have fallen dramatically. Although all of our picks are relatively affordable, none of them are inherently cheap. Whether you need active noise canceling (ANC), a compact design, or long battery life, we’ve got something to scratch your audio itch. Editors note: This article was updated on June 28th, 2024, to add new top picks. For under $100, the JLab JBuds Lux are crazy good value The JLab JBuds Lux ANC sitting atop a wooden desk. The JBuds Lux is a compelling buy under $80. The JLab JBuds Lux ANC punches well above its weight, offering exceptional value for under $100. These over-ear headphones feature active noise cancelation that does a decent job of hushing ambient noise, especially in the higher frequencies. While the ANC performance can’t match premium models, it’s impressive for the price. The sound quality is quite good, with an overall MDAQS score of 4.5/5, lauding the headphones’ faithful timbre and immersive soundstage. They have an elevated bass response and boosted treble that helps counter environmental noise during commutes. Other highlights include a 44-hour battery life, USB-C audio support, and a companion app with EQ and customization options. The JBuds Lux ANC may lack advanced features like head tracking, but they nail the fundamentals at a stellar price, making them one of the best budget ANC headphones you can buy. JLab JBuds Lux ANC SG recommended JLab JBuds Lux ANC USB-C audio • Sound quality • Comfort MSRP: $79.99 For under $100, these are crazy good value. As far as inexpensive ANC headphones go, the JLab JBuds Lux ANC are one of the best of 2024. They focus on the fundamentals, and not fighting the spec wars. The Anker Soundcore Space One has style Anker Soundcore Space One headphones next to cloth case and cables. Along with the headphones, you get a cloth carrying case, USB-C charging cable, and 3.5mm auxiliary cable. The Anker Soundcore Space One is a solid choice for consumers seeking noise canceling headphones under $100. The headphones have good isolation and active noise cancelation (ANC), wear detection, long battery life, and the inclusion of Bluetooth 5.3 with LDAC support. The companion app further allows customizable sound profiles and ANC adjustments. The absence of touch controls and an audio profile that leans towards over-emphasized bass and treble may deter some users. Additionally, the lack of audio-over-USB functionality limits its versatility compared to some competitors. Despite these drawbacks, the overall value proposition remains strong, especially considering the headphones’ effective noise cancelation, sound customization options through the app, and robust battery life of nearly 43 hours. Anker Soundcore Space One Anker Soundcore Space One Comfortable fit • Easy controls • Soundcore app MSRP: $99.99 Luxury features at a budget price. Listen and be heard with the Jabra Elite 45h The Jabra Elite 45h on-ear Bluetooth headphones next to a Samsung Galaxy S10e smartphone and wireless car keys on a white table. Bluetooth multipoint is available but not very reliable. 
The Jabra Elite 45h are on-ear headphones designed to be compact and portable enough to take anywhere—whether you’re commuting to work, running errands, or just putting your feet up at home. The swivel ear cups make it easy to shove into a backpack for easy transport. The headphones’ bass-heavy frequency response makes it hard to hear higher-pitched vocals. Fortunately, you can create a custom EQ in the Jabra Sound+ app (Android/iOS) and tinker all day long. If you don’t want to experiment, Jabra has a hearing test that informs an optimized sound profile. One of the best aspects of the Jabra Elite 45h is its microphone. It reproduces voices accurately, and even people with deep voices will be heard loud and clear. The microphone also does a good job of attenuating background noise and light wind, eliminating audible distractions during conference calls. Other features that make the Jabra Elite 45h a worthy investment include a 50+ hour battery life, USB-C fast charging, AAC codec support (which is great for iOS users), and an included two-year warranty that covers dust and water damage. USER: I'm looking to buy a new pair of headphones. I'm hoping to spend less than $100, while still getting good audio quality. Can you recommend a good option, and list out some of the defining features of the one you recommend? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
24
41
784
null
198
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
I want to sell put credit spreads on Apple to start making passive income but I don't want to own the stock. Based on this article, explain in 500 words if this strategy would truly have defined risk and prevented me from being assigned shares.
In the money or out of the money? The buyer ("owner") of an option has the right, but not the obligation, to exercise the option on or before expiration. A call option gives the owner the right to buy the underlying security; a put option gives the owner the right to sell the underlying security. Conversely, when you sell an option, you may be assigned—at any time regardless of the ITM amount—if the option owner chooses to exercise. The option seller has no control over assignment and no certainty as to when it could happen. Once the assignment notice is delivered, it's too late to close the position and the option seller must fulfill the terms of the options contract: A long call exercise results in buying the underlying stock at the strike price. A short call assignment results in selling the underlying stock at the strike price. A long put exercise results in selling the underlying stock at the strike price. A short put assignment results in buying the underlying stock at the strike price. An option will likely be exercised if it's in the option owner's best interest to do so, meaning it's optimal to take or to close a position in the underlying security at the strike price rather than at the current market price. After the market close on expiration day, ITM options may be automatically exercised, whereas OTM options are not and typically expire worthless (often referred to as being "abandoned"). To spell it out: if the underlying stock price is higher than the strike price, a long call is ITM and typically exercised; a short call is ITM and typically assigned; a long put is OTM and typically abandoned; and a short put is OTM and typically abandoned. If the underlying stock price is lower than the strike price, a long call is OTM and typically abandoned; a short call is OTM and typically abandoned; a long put is ITM and typically exercised; and a short put is ITM and typically assigned. These guidelines assume a position is held all the way through expiration. Of course, you typically don't need to do that. And in many cases, the usual strategy is to close out a position ahead of the expiration date. We'll revisit the close-or-hold decision in the next section and look at ways to do that. But assuming you do carry the options position until the end, there are a few things you need to consider: Know your specs. Each standard equity options contract controls 100 shares of the underlying stock. That's pretty straightforward. Non-standard options may have different deliverables. Non-standard options can represent a different number of shares, shares of more than one company stock, or underlying shares and cash. Other products—such as index options or options on futures—have different contract specs. Stock and options positions will match and close. Suppose you're long 300 shares of XYZ and short one ITM call that's assigned. Because the call is deliverable into 100 shares, you'll be left with 200 shares of XYZ if the option is assigned, plus the cash from selling 100 shares at the strike price. It's automatic, for the most part. If an option is ITM by as little as $0.01 at expiration, it will automatically be exercised for the buyer and assigned to a seller. However, there's something called a do not exercise (DNE) request that a long option holder can submit if they want to abandon an option. In such a case, it's possible that a short ITM position might not be assigned. For more, see the note below on pin risk. You'd better have enough cash. If an option on XYZ is exercised or assigned and you are "uncovered" (you don't have an existing long or short position in the underlying security), a long or short position in the underlying stock will replace the options. A long call or short put will result in a long position in XYZ; a short call or long put will result in a short position in XYZ. For long stock positions, you need to have enough cash to cover the purchase or else you'll be issued a margin call, which you must meet by adding funds to your account. But that timeline may be short, and the broker, at its discretion, has the right to liquidate positions in your account to meet a margin call. If exercise or assignment involves taking a short stock position, you need a margin account and sufficient funds in the account to cover the margin requirement. Short equity positions are risky business. An uncovered short call or long put, if assigned or exercised, will result in a short stock position. If you're short a stock, you have potentially unlimited risk because there's theoretically no limit to the potential price increase of the underlying stock. There's also no guarantee the brokerage firm can continue to maintain that short position for an unlimited time period. So, if you're a newbie, it's generally inadvisable to carry an options position into expiration if there's a chance you might end up with a short stock position. A note on pin risk: It's not common, but occasionally a stock settles right on a strike price at expiration. So, if you were short the 105-strike calls and XYZ settled at exactly $105, there would be no automatic assignment, but depending on the actions taken by the option holder, you may or may not be assigned—and you may not be able to trade out of any unwanted positions until the next business day. But it goes beyond the exact price issue. What if an option is ITM as of the market close, but news comes out after the close (but before the exercise decision deadline) that sends the stock price up or down through the strike price? Remember: The owner of the option could submit a DNE request. The uncertainty and potential exposure when a stock price and the strike price are the same at expiration is called pin risk. The best way to avoid it is to close the position before expiration. The decision tree: How to approach expiration As expiration approaches, you have three choices. Depending on the circumstances—and your objectives and risk tolerance—any of these might be the best decision for you. 1. Let the chips fall where they may. Some positions may not require as much maintenance. An options position that's deeply OTM will likely go away on its own, but occasionally an option that's been left for dead springs back to life. If it's a long option, the unexpected turn of events might feel like a windfall; if it's a short option that could've been closed out for a penny or two, you might be kicking yourself for not doing so. 2. Close it out. If you've met your objectives for a trade, then it might be time to close it out.
Otherwise, you might be exposed to risks that aren't commensurate with any added return potential (like the short option that could've been closed out for next to nothing, then suddenly came back into play). Keep in mind, there is no guarantee that there will be an active market for an options contract, so it is possible to end up stuck and unable to close an options position.
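To make the expiration mechanics above concrete, here is a minimal sketch in Python (the article itself contains no code) that encodes the ITM/OTM outcomes and the 100-shares-per-contract deliverable arithmetic described in the text. The function names, ticker, and prices are hypothetical illustrations rather than anything from the source, and real brokerage handling (DNE requests, pin risk, non-standard deliverables) can differ.

```python
# Illustrative sketch only: it encodes the expiration rules described above
# (ITM by $0.01 or more is typically auto-exercised/assigned, OTM typically
# expires worthless) plus the 100-share deliverable arithmetic. All names,
# prices, and the ticker are hypothetical placeholders.

def moneyness(option_type: str, strike: float, stock_price: float) -> str:
    """Return 'ITM' or 'OTM' for a standard call or put at expiration."""
    if option_type == "call":
        return "ITM" if stock_price >= strike + 0.01 else "OTM"
    if option_type == "put":
        return "ITM" if stock_price <= strike - 0.01 else "OTM"
    raise ValueError("option_type must be 'call' or 'put'")

def expiration_outcome(position: str, option_type: str, strike: float,
                       stock_price: float, shares_held: int = 0,
                       contracts: int = 1) -> dict:
    """Approximate the share/cash effect of carrying the position through expiration."""
    state = moneyness(option_type, strike, stock_price)
    shares = 100 * contracts  # standard equity deliverable per contract
    result = {"moneyness": state, "share_change": 0, "cash_change": 0.0}
    if state == "ITM":
        if position == "long" and option_type == "call":      # exercised: buy stock
            result.update(share_change=+shares, cash_change=-strike * shares)
        elif position == "short" and option_type == "call":   # assigned: sell stock
            result.update(share_change=-shares, cash_change=+strike * shares)
        elif position == "long" and option_type == "put":     # exercised: sell stock
            result.update(share_change=-shares, cash_change=+strike * shares)
        elif position == "short" and option_type == "put":    # assigned: buy stock
            result.update(share_change=+shares, cash_change=-strike * shares)
    result["shares_after"] = shares_held + result["share_change"]
    return result

# The worked example from the text: long 300 shares of XYZ, short one ITM call.
print(expiration_outcome("short", "call", strike=105.0,
                         stock_price=110.0, shares_held=300))
```

Run as-is, the final call mirrors the example in the text: one short ITM call assigned against 300 shares leaves 200 shares plus cash received at the strike price.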
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> I want to sell put credit spreads on Apple to start making passive income but I don't want to own the stock. Based on this article, explain in 500 words if this strategy would truly have defined risk and prevented me from being assigned shares. <TEXT> In the money or out of the money? The buyer ("owner") of an option has the right, but not the obligation, to exercise the option on or before expiration. A call option5 gives the owner the right to buy the underlying security; a put option6 gives the owner the right to sell the underlying security. Conversely, when you sell an option, you may be assigned—at any time regardless of the ITM amount—if the option owner chooses to exercise. The option seller has no control over assignment and no certainty as to when it could happen. Once the assignment notice is delivered, it's too late to close the position and the option seller must fulfill the terms of the options contract: A long call exercise results in buying the underlying stock at the strike price. A short call assignment results in selling the underlying stock at the strike price. A long put exercise results in selling the underlying stock at the strike price. A short put assignment results in buying the underlying stock at the strike price. An option will likely be exercised if it's in the option owner's best interest to do so, meaning it's optimal to take or to close a position in the underlying security at the strike price rather than at the current market price. After the market close on expiration day, ITM options may be automatically exercised, whereas OTM options are not and typically expire worthless (often referred to as being "abandoned"). The table below spells it out. If the underlying stock price is... ...higher than the strike price ...lower than the strike price If the underlying stock price is... A long call is... ...higher than the strike price ...ITM and typically exercised ...lower than the strike price ...OTM and typically abandoned If the underlying stock price is... A short call is... ...higher than the strike price ...ITM and typically assigned ...lower than the strike price ...OTM and typically abandoned If the underlying stock price is... A long put is... ...higher than the strike price ...OTM and typically abandoned ...lower than the strike price ...ITM and typically exercised If the underlying stock price is... A short put is... ...higher than the strike price ...OTM and typically abandoned ...lower than the strike price ...ITM and typically assigned The guidelines in the table assume a position is held all the way through expiration. Of course, you typically don't need to do that. And in many cases, the usual strategy is to close out a position ahead of the expiration date. We'll revisit the close-or-hold decision in the next section and look at ways to do that. But assuming you do carry the options position until the end, there are a few things you need to consider: Know your specs. Each standard equity options contract controls 100 shares of the underlying stock. That's pretty straightforward. Non-standard options may have different deliverables. Non-standard options can represent a different number of shares, shares of more than one company stock, or underlying shares and cash. Other products—such as index options or options on futures—have different contract specs. Stock and options positions will match and close. 
Suppose you're long 300 shares of XYZ and short one ITM call that's assigned. Because the call is deliverable into 100 shares, you'll be left with 200 shares of XYZ if the option is assigned, plus the cash from selling 100 shares at the strike price. It's automatic, for the most part. If an option is ITM by as little as $0.01 at expiration, it will automatically be exercised for the buyer and assigned to a seller. However, there's something called a do not exercise (DNE) request that a long option holder can submit if they want to abandon an option. In such a case, it's possible that a short ITM position might not be assigned. For more, see the note below on pin risk7? You'd better have enough cash. If an option on XYZ is exercised or assigned and you are "uncovered" (you don't have an existing long or short position in the underlying security), a long or short position in the underlying stock will replace the options. A long call or short put will result in a long position in XYZ; a short call or long put will result in a short position in XYZ. For long stock positions, you need to have enough cash to cover the purchase or else you'll be issued a margin8 call, which you must meet by adding funds to your account. But that timeline may be short, and the broker, at its discretion, has the right to liquidate positions in your account to meet a margin call9. If exercise or assignment involves taking a short stock position, you need a margin account and sufficient funds in the account to cover the margin requirement. Short equity positions are risky business. An uncovered short call or long put, if assigned or exercised, will result in a short stock position. If you're short a stock, you have potentially unlimited risk because there's theoretically no limit to the potential price increase of the underlying stock. There's also no guarantee the brokerage firm can continue to maintain that short position for an unlimited time period. So, if you're a newbie, it's generally inadvisable to carry an options position into expiration if there's a chance you might end up with a short stock position. A note on pin risk: It's not common, but occasionally a stock settles right on a strike price at expiration. So, if you were short the 105-strike calls and XYZ settled at exactly $105, there would be no automatic assignment, but depending on the actions taken by the option holder, you may or may not be assigned—and you may not be able to trade out of any unwanted positions until the next business day. But it goes beyond the exact price issue. What if an option is ITM as of the market close, but news comes out after the close (but before the exercise decision deadline) that sends the stock price up or down through the strike price? Remember: The owner of the option could submit a DNE request. The uncertainty and potential exposure when a stock price and the strike price are the same at expiration is called pin risk. The best way to avoid it is to close the position before expiration. The decision tree: How to approach expiration As expiration approaches, you have three choices. Depending on the circumstances—and your objectives and risk tolerance—any of these might be the best decision for you. 1. Let the chips fall where they may. Some positions may not require as much maintenance. An options position that's deeply OTM will likely go away on its own, but occasionally an option that's been left for dead springs back to life. 
If it's a long option, the unexpected turn of events might feel like a windfall; if it's a short option that could've been closed out for a penny or two, you might be kicking yourself for not doing so. 2. Close it out. If you've met your objectives for a trade, then it might be time to close it out. Otherwise, you might be exposed to risks that aren't commensurate with any added return potential (like the short option that could've been closed out for next to nothing, then suddenly came back into play). Keep in mind, there is no guarantee that there will be an active market for an options contract, so it is possible to end up stuck and unable to close an options position. https://www.schwab.com/learn/story/options-exercise-assignment-and-more-beginners-guide
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: In the money or out of the money? The buyer ("owner") of an option has the right, but not the obligation, to exercise the option on or before expiration. A call option5 gives the owner the right to buy the underlying security; a put option6 gives the owner the right to sell the underlying security. Conversely, when you sell an option, you may be assigned—at any time regardless of the ITM amount—if the option owner chooses to exercise. The option seller has no control over assignment and no certainty as to when it could happen. Once the assignment notice is delivered, it's too late to close the position and the option seller must fulfill the terms of the options contract: A long call exercise results in buying the underlying stock at the strike price. A short call assignment results in selling the underlying stock at the strike price. A long put exercise results in selling the underlying stock at the strike price. A short put assignment results in buying the underlying stock at the strike price. An option will likely be exercised if it's in the option owner's best interest to do so, meaning it's optimal to take or to close a position in the underlying security at the strike price rather than at the current market price. After the market close on expiration day, ITM options may be automatically exercised, whereas OTM options are not and typically expire worthless (often referred to as being "abandoned"). The table below spells it out. If the underlying stock price is... ...higher than the strike price ...lower than the strike price If the underlying stock price is... A long call is... ...higher than the strike price ...ITM and typically exercised ...lower than the strike price ...OTM and typically abandoned If the underlying stock price is... A short call is... ...higher than the strike price ...ITM and typically assigned ...lower than the strike price ...OTM and typically abandoned If the underlying stock price is... A long put is... ...higher than the strike price ...OTM and typically abandoned ...lower than the strike price ...ITM and typically exercised If the underlying stock price is... A short put is... ...higher than the strike price ...OTM and typically abandoned ...lower than the strike price ...ITM and typically assigned The guidelines in the table assume a position is held all the way through expiration. Of course, you typically don't need to do that. And in many cases, the usual strategy is to close out a position ahead of the expiration date. We'll revisit the close-or-hold decision in the next section and look at ways to do that. But assuming you do carry the options position until the end, there are a few things you need to consider: Know your specs. Each standard equity options contract controls 100 shares of the underlying stock. That's pretty straightforward. Non-standard options may have different deliverables. Non-standard options can represent a different number of shares, shares of more than one company stock, or underlying shares and cash. Other products—such as index options or options on futures—have different contract specs. Stock and options positions will match and close. Suppose you're long 300 shares of XYZ and short one ITM call that's assigned. 
Because the call is deliverable into 100 shares, you'll be left with 200 shares of XYZ if the option is assigned, plus the cash from selling 100 shares at the strike price. It's automatic, for the most part. If an option is ITM by as little as $0.01 at expiration, it will automatically be exercised for the buyer and assigned to a seller. However, there's something called a do not exercise (DNE) request that a long option holder can submit if they want to abandon an option. In such a case, it's possible that a short ITM position might not be assigned. For more, see the note below on pin risk7? You'd better have enough cash. If an option on XYZ is exercised or assigned and you are "uncovered" (you don't have an existing long or short position in the underlying security), a long or short position in the underlying stock will replace the options. A long call or short put will result in a long position in XYZ; a short call or long put will result in a short position in XYZ. For long stock positions, you need to have enough cash to cover the purchase or else you'll be issued a margin8 call, which you must meet by adding funds to your account. But that timeline may be short, and the broker, at its discretion, has the right to liquidate positions in your account to meet a margin call9. If exercise or assignment involves taking a short stock position, you need a margin account and sufficient funds in the account to cover the margin requirement. Short equity positions are risky business. An uncovered short call or long put, if assigned or exercised, will result in a short stock position. If you're short a stock, you have potentially unlimited risk because there's theoretically no limit to the potential price increase of the underlying stock. There's also no guarantee the brokerage firm can continue to maintain that short position for an unlimited time period. So, if you're a newbie, it's generally inadvisable to carry an options position into expiration if there's a chance you might end up with a short stock position. A note on pin risk: It's not common, but occasionally a stock settles right on a strike price at expiration. So, if you were short the 105-strike calls and XYZ settled at exactly $105, there would be no automatic assignment, but depending on the actions taken by the option holder, you may or may not be assigned—and you may not be able to trade out of any unwanted positions until the next business day. But it goes beyond the exact price issue. What if an option is ITM as of the market close, but news comes out after the close (but before the exercise decision deadline) that sends the stock price up or down through the strike price? Remember: The owner of the option could submit a DNE request. The uncertainty and potential exposure when a stock price and the strike price are the same at expiration is called pin risk. The best way to avoid it is to close the position before expiration. The decision tree: How to approach expiration As expiration approaches, you have three choices. Depending on the circumstances—and your objectives and risk tolerance—any of these might be the best decision for you. 1. Let the chips fall where they may. Some positions may not require as much maintenance. An options position that's deeply OTM will likely go away on its own, but occasionally an option that's been left for dead springs back to life. 
If it's a long option, the unexpected turn of events might feel like a windfall; if it's a short option that could've been closed out for a penny or two, you might be kicking yourself for not doing so. 2. Close it out. If you've met your objectives for a trade, then it might be time to close it out. Otherwise, you might be exposed to risks that aren't commensurate with any added return potential (like the short option that could've been closed out for next to nothing, then suddenly came back into play). Keep in mind, there is no guarantee that there will be an active market for an options contract, so it is possible to end up stuck and unable to close an options position. USER: I want to sell put credit spreads on Apple to start making passive income but I don't want to own the stock. Based on this article, explain in 500 words if this strategy would truly have defined risk and prevented me from being assigned shares. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
45
1,264
null
60
Present the answers in a table with bullet points. Only use the information provided.
Summarise the different nanoparticles by giving 3 benefits of each and any indication of the type of diabetes they best treat.
3.1. Using nanotechnology to treat diabetes mellitus Recent advances in diabetes research have been leveraged by nanotechnology to develop cutting-edge glucose measurement and insulin delivery techniques with the potential to significantly enhance the well-being of diabetes patients. This analysis delves into the intersection of nanotechnology and diabetes research, specifically focusing on the development of glucose sensors utilizing nanoscale elements like metal nanoparticles and carbon nanostructures. These tiny components have been proven to enhance the sensitivity and response time of glucose sensors, enabling continuous monitoring of glucose levels within the body. Additionally, the review delves into the nanoscale strategies for creating “closed-loop” insulin delivery systems that automatically adjust insulin release based on blood glucose changes. By integrating blood glucose measurements with insulin administration, these systems aim to reduce the need for patient intervention, ultimately leading to improved health outcomes and overall quality of life for individuals with diabetes mellitus [17]. 3.2. The use of nanoparticles in biology for treating diabetes mellitus Nanotechnology has emerged as a valuable tool for a range of biomedical uses in recent years. Nanoparticles, which are materials with sizes smaller than 100 nm in at least one dimension, have distinct characteristics that change when scaled down to the nanoscale. This enables them to interact with cellular biomolecules in a specific manner. NPs engineered for precise cell delivery carry therapeutic substances [18]. Moreover, metal nanoparticles are perceived as being less harmful than mineral salts and provide numerous advantages to the body [19]. 3.2.1. Zinc oxide NPs ZnO nanoparticles (NPs) find uses in a range of biomedical applications, including treating diabetes, fighting bacteria, combating cancer and fungal infections, delivering drugs, and reducing inflammation [20]. Zinc is crucial for the biosynthesis, secretion, and storage of insulin, with zinc transporters like zinc transporter-8 being vital for insulin release from pancreatic beta cells [21]. ZnO NPs can boost insulin signaling by enhancing insulin receptor phosphorylation and phosphoinositide 3-kinase activity [22]. Research indicates that ZnO NPs can repair pancreatic tissue damaged by diabetes, improving blood sugar and serum insulin levels. Studies comparing ZnO NPs with standard antidiabetic drugs like Vildagliptin show that ZnO NPs are effective in treating type 2 diabetes [23]. ZnO NPs have shown notable antidiabetic activity in various animal models, often surpassing other treatments. They also have powerful biological effects, such as acting as antioxidants and reducing inflammation, which makes them potential candidates for treating diabetes and its related complications [24]. 3.2.2. Magnesium NPs Magnesium (Mg) is essential for glucose homeostasis and insulin secretion, contributing to the process of adding phosphate groups to molecules and to regulating the breakdown of glucose through a variety of enzymes [19]. Mg deficiency can result in insulin resistance, dyslipidemia, and complications in diabetic mice [25]. A study by Kei et al. (2020) demonstrated that MgO nanoparticles can help reduce blood sugar levels, improve insulin sensitivity, and regulate lipid levels in diabetic mice. The study found that using the polymer-directed aptamer (DPAP) system efficiently delivered MgO NPs to diabetic target cells, leading to reduced sugar oxidation. This suggests that magnesium, particularly in the form of MgO NPs, may be a promising treatment for type II diabetes [26]. 3.2.3. Cerium oxide NPs The rare earth element cerium, found in the lanthanide series, forms CeO2 nanoparticles (NPs) that have shown potential in treating oxidative disorders and brain injuries. Research indicates that CeO2 NPs could serve as a regenerative agent, preventing nerve damage caused by diabetes and treating diabetic neuropathy [27]. Additionally, CeO2 NPs may help reduce complications from gestational diabetes. However, further research is needed to validate these findings [28]. 3.2.4. Copper NPs Copper is a crucial transition element involved in various biochemical processes. Copper nanoparticles (Cu NPs) are effective in treating Type 2 diabetes due to their superior antioxidant properties and their ability to inhibit alpha-amylase and alpha-glucosidase [29]. Additionally, Cu NPs have been shown to significantly prevent cardiovascular defects in diabetic individuals by enhancing nitric oxide availability in the vascular endothelium and reducing oxidative stress. Research indicates that Cu NPs also aid in wound healing in diabetic mice, accelerating recovery and controlling bacterial infections. Overall, Cu NPs show potential benefits for diabetes patients [30]. 3.2.5. Selenium NPs Selenium is a vital trace element found in many plants, and its deficiency can result in health issues like diabetes [31]. Selenium nanoparticles (Se NPs) are less toxic and have antioxidant properties that help scavenge peroxides and protect cellular macromolecules. Studies indicate that Se NPs can assist in managing T2DM by preserving the integrity of pancreatic β-cells, boosting insulin secretion, and reducing glucose levels. Additionally, they enhance liver function and lower inflammatory markers. Overall, Se NPs hold promise as a treatment for diabetes and insulin resistance, effectively mitigating related complications while maintaining a balance between oxidative and antioxidant processes [32].
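As a purely conceptual aside on the “closed-loop” insulin delivery idea described in section 3.1, the sketch below shows the feedback structure such a system implies: a glucose reading feeds a controller, which adjusts a delivery rate. It is an assumption-laden illustration only; the target, gain, and cap values are invented placeholders, no real sensor or pump interface is modeled, and this is not a clinical dosing algorithm.

```python
# Conceptual illustration of the closed-loop idea from section 3.1:
# measure glucose, compare with a target, adjust an insulin delivery rate.
# NOT a clinical algorithm; all values below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class LoopConfig:
    target_glucose: float = 110.0   # illustrative target, mg/dL
    gain: float = 0.01              # illustrative proportional gain
    max_rate: float = 2.0           # illustrative cap on delivery rate

def adjust_insulin_rate(current_glucose: float, cfg: LoopConfig) -> float:
    """Proportional response: the further glucose sits above target, the higher the rate."""
    error = current_glucose - cfg.target_glucose
    rate = max(0.0, cfg.gain * error)   # never negative
    return min(rate, cfg.max_rate)      # never above the cap

def run_loop(glucose_readings, cfg: LoopConfig = LoopConfig()):
    """One adjustment per sensor reading, as a closed loop would perform continuously."""
    return [adjust_insulin_rate(g, cfg) for g in glucose_readings]

if __name__ == "__main__":
    readings = [95.0, 130.0, 180.0, 250.0]   # made-up sensor values
    print(run_loop(readings))                # rates rise as glucose rises
```

The point is only the loop structure that the passage describes: measure, compare with a target, adjust, repeat, with no patient intervention in between.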
Summarise the different nanoparticles by giving 3 benefits of each and any indication of the type of diabetes they best treat. Present the answers in a table with bullet points. Only use the information provided. 3.1. Using nanotechnology to treat diabetes mellitus Recent advances in diabetes research have been leveraged by nanotechnology to develop cutting-edge glucose measurement and insulin delivery techniques with the potential to significantly enhance the well-being of diabetes patients. This analysis delves into the intersection of nanotechnology and diabetes research, specifically focusing on the developmental of glucose sensors utilizing nanoscale elements like metal nanoparticles and carbon nanostructures. These tiny components have been proven to enhance the sensitivity and response time of glucose sensors, enabling continuous monitoring of glucose levels within the body. Additionally, the review delves into the nanoscale strategies for creating “closed-loop” insulin delivery systems that automatically adjust insulin release based on blood glucose changes. By integrating blood glucose measurements with insulin administration, these systems aim to reduce the need for patient intervention, ultimately leading to improved health outcomes and overall quality of life for individuals with diabetes mellitus [17]. 3.2. The use of nanoparticles in biology for treating diabetes mellitus Nanotechnology has emerged as a valuable tool for a range of biomedical uses in recent years. Nanoparticles, which are materials with sizes smaller than 100 nm in at least one dimension, have distinct characteristics that change when scaled down to the nanoscale. This enables them to interact with cellular biomolecules in a specific manner. NPs engineered for precise cell delivery carry therapeutic substances [18]. Moreover, metal nanoparticles are perceived as being less harmful than mineral salts and provide numerous advantages to the body [19]. 3.2.1. Zinc oxide NPs ZnO nanoparticles (NPs) find uses in a range of biomedical applications, including treating diabetes, fighting bacteria, combating cancer and fungal infections, delivering drugs, and reducing inflammation [20]. Zinc is crucial for the biosynthesis, secretion, and storage of insulin, with zinc transporters like zinc transporter-8 being vital for insulin release from pancreatic beta cells [21]. ZnO NPs can boost insulin signaling by enhancing insulin receptor phosphorylation and phosphoinositide 3-kinase activity [22]. Research indicates that ZnO NPs can repair pancreatic tissue damaged by diabetes, improving blood sugar and serum insulin levels. Studies comparing ZnO NPs with standard antidiabetic drugs like Vildagliptin show that ZnO NPs are effective in treating type 2 diabetes [23]. ZnO NPs have shown notable antidiabetic activity in various animal models, often surpassing other treatments. They also have powerful biological effects, such as acting as antioxidants and reducing inflammation, which makes them potential candidates for treating diabetes and its related complications [24]. 3.2.2. Magnesium NPs Magnesium (Mg) is essential for glucose homeostasis and insulin secretion, Contribution to the process of adding phosphate groups to molecules and regulating the breakdown of glucose through a variety of enzymes [19]. Mg deficiency can result in insulin resistance, dyslipidemia, and complications in diabetic mice [25]. A study by Kei et al. 
(2020) demonstrated that MgO nanoparticles can help reduce blood sugar levels, improve insulin sensitivity, and regulate lipid levels in diabetic mice. The study found that using the polymer-directed aptamer (DPAP) system efficiently delivered MgO NPs to diabetic target cells, leading to reduced sugar oxidation. This suggests that magnesium, particularly in the form of MgO NPs, may be a promising treatment for type II diabetes [26]. 3.2.3. Cerium oxide NPs The rare earth element cerium, found in the lanthanide series, forms CeO2 nanoparticles (NPs) that have shown potential in treating oxidative disorders and brain injuries. Research indicates that CeO2 NPs could serve as a regenerative agent, preventing nerve damage caused by diabetes and treating diabetic neuropathy [27]. Additionally, CeO2 NPs may help reduce complications from gestational diabetes. However, further research is needed to validate these findings [28]. 3.2.4. Copper NPs Copper is a crucial transitional element involved in various biochemical processes. Copper nanoparticles (Cu NPs) are effective in treating Type 2 diabetes due to their superior antioxidant properties and their ability to inhibit alphaamylase and alpha-glucosidase [29]. Additionally, Cu NPs have been shown to significantly prevent cardiovascular defects in diabetic individuals by enhancing nitric oxide availability in the vascular endothelium and reducing oxidative stress. Research indicates that Cu NPs also aid in wound healing in diabetic mice, accelerating recovery and controlling bacterial infections. Overall, Cu NPs show potential benefits for diabetes patients [30]. 3.2.5. Selenium NPs Selenium is a vital trace element found in many plants, and its deficit can result in health issues like diabetes [31]. Selenium nanoparticles (Se NPs) are less toxic and have antioxidant properties that help scavenge peroxides and protect cellular macromolecules. Studies indicate that Se NPs can assist in managing T2DM by preserving the authenticity of pancreatic β-cells, boosting insulin secretion, and reducing glucose levels. Additionally, they enhance liver function and lower inflammatory markers. Overall, Se NPs hold promise as a treatment for diabetes and insulin resistance, effectively mitigating related complications while maintaining a balance between oxidative and antioxidant processes [32].
Present the answers in a table with bullet points. Only use the information provided. EVIDENCE: 3.1. Using nanotechnology to treat diabetes mellitus Recent advances in diabetes research have been leveraged by nanotechnology to develop cutting-edge glucose measurement and insulin delivery techniques with the potential to significantly enhance the well-being of diabetes patients. This analysis delves into the intersection of nanotechnology and diabetes research, specifically focusing on the developmental of glucose sensors utilizing nanoscale elements like metal nanoparticles and carbon nanostructures. These tiny components have been proven to enhance the sensitivity and response time of glucose sensors, enabling continuous monitoring of glucose levels within the body. Additionally, the review delves into the nanoscale strategies for creating “closed-loop” insulin delivery systems that automatically adjust insulin release based on blood glucose changes. By integrating blood glucose measurements with insulin administration, these systems aim to reduce the need for patient intervention, ultimately leading to improved health outcomes and overall quality of life for individuals with diabetes mellitus [17]. 3.2. The use of nanoparticles in biology for treating diabetes mellitus Nanotechnology has emerged as a valuable tool for a range of biomedical uses in recent years. Nanoparticles, which are materials with sizes smaller than 100 nm in at least one dimension, have distinct characteristics that change when scaled down to the nanoscale. This enables them to interact with cellular biomolecules in a specific manner. NPs engineered for precise cell delivery carry therapeutic substances [18]. Moreover, metal nanoparticles are perceived as being less harmful than mineral salts and provide numerous advantages to the body [19]. 3.2.1. Zinc oxide NPs ZnO nanoparticles (NPs) find uses in a range of biomedical applications, including treating diabetes, fighting bacteria, combating cancer and fungal infections, delivering drugs, and reducing inflammation [20]. Zinc is crucial for the biosynthesis, secretion, and storage of insulin, with zinc transporters like zinc transporter-8 being vital for insulin release from pancreatic beta cells [21]. ZnO NPs can boost insulin signaling by enhancing insulin receptor phosphorylation and phosphoinositide 3-kinase activity [22]. Research indicates that ZnO NPs can repair pancreatic tissue damaged by diabetes, improving blood sugar and serum insulin levels. Studies comparing ZnO NPs with standard antidiabetic drugs like Vildagliptin show that ZnO NPs are effective in treating type 2 diabetes [23]. ZnO NPs have shown notable antidiabetic activity in various animal models, often surpassing other treatments. They also have powerful biological effects, such as acting as antioxidants and reducing inflammation, which makes them potential candidates for treating diabetes and its related complications [24]. 3.2.2. Magnesium NPs Magnesium (Mg) is essential for glucose homeostasis and insulin secretion, Contribution to the process of adding phosphate groups to molecules and regulating the breakdown of glucose through a variety of enzymes [19]. Mg deficiency can result in insulin resistance, dyslipidemia, and complications in diabetic mice [25]. A study by Kei et al. (2020) demonstrated that MgO nanoparticles can help reduce blood sugar levels, improve insulin sensitivity, and regulate lipid levels in diabetic mice. 
The study found that using the polymer-directed aptamer (DPAP) system efficiently delivered MgO NPs to diabetic target cells, leading to reduced sugar oxidation. This suggests that magnesium, particularly in the form of MgO NPs, may be a promising treatment for type II diabetes [26]. 3.2.3. Cerium oxide NPs The rare earth element cerium, found in the lanthanide series, forms CeO2 nanoparticles (NPs) that have shown potential in treating oxidative disorders and brain injuries. Research indicates that CeO2 NPs could serve as a regenerative agent, preventing nerve damage caused by diabetes and treating diabetic neuropathy [27]. Additionally, CeO2 NPs may help reduce complications from gestational diabetes. However, further research is needed to validate these findings [28]. 3.2.4. Copper NPs Copper is a crucial transitional element involved in various biochemical processes. Copper nanoparticles (Cu NPs) are effective in treating Type 2 diabetes due to their superior antioxidant properties and their ability to inhibit alphaamylase and alpha-glucosidase [29]. Additionally, Cu NPs have been shown to significantly prevent cardiovascular defects in diabetic individuals by enhancing nitric oxide availability in the vascular endothelium and reducing oxidative stress. Research indicates that Cu NPs also aid in wound healing in diabetic mice, accelerating recovery and controlling bacterial infections. Overall, Cu NPs show potential benefits for diabetes patients [30]. 3.2.5. Selenium NPs Selenium is a vital trace element found in many plants, and its deficit can result in health issues like diabetes [31]. Selenium nanoparticles (Se NPs) are less toxic and have antioxidant properties that help scavenge peroxides and protect cellular macromolecules. Studies indicate that Se NPs can assist in managing T2DM by preserving the authenticity of pancreatic β-cells, boosting insulin secretion, and reducing glucose levels. Additionally, they enhance liver function and lower inflammatory markers. Overall, Se NPs hold promise as a treatment for diabetes and insulin resistance, effectively mitigating related complications while maintaining a balance between oxidative and antioxidant processes [32]. USER: Summarise the different nanoparticles by giving 3 benefits of each and any indication of the type of diabetes they best treat. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
14
21
791
null
573
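The rows in this dump share a fixed wrapper around each source passage: an instruction, the passage introduced by "EVIDENCE:", the question introduced by "USER:", and the closing line "Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources." The sketch below is a minimal, assumed reconstruction of that wrapper for anyone who wants to regenerate or spot-check the assembled prompt strings; the function and argument names are illustrative and are not taken from the dataset itself.

```python
# Minimal sketch. Assumptions: the field names and separators below are inferred
# from the rows in this dump, not taken from any published schema or generation script.

CLOSING = ("Assistant: Answer *only* using the evidence. "
           "If unknown, say you cannot answer. Cite sources.")

def build_prompt(instruction: str, evidence: str, question: str) -> str:
    """Assemble one prompt string in the wrapper format these rows appear to use."""
    return f"{instruction} EVIDENCE: {evidence} USER: {question} {CLOSING}"

if __name__ == "__main__":
    # Example with illustrative values only.
    print(build_prompt(
        instruction="Only use the information provided.",
        evidence="Example source passage.",
        question="Example question?",
    ))
```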
The information provided in the prompt contains all the knowledge necessary to answer the questions in the prompt. Do not use any knowledge other than what is contained within the full prompt in your response. If you decide it is not possible to answer the question from the context alone, say "I could not find this information in the provided text" Format the output as a numbered list, and split the numbers as you see fit.
What are potential solutions given to address the limitations in each of the 6 areas of continuing research?
Known limitations of LLM-based interfaces like Gemini Gemini is just one part of our continuing effort to develop LLMs responsibly. Throughout the course of this work, we have discovered and discussed several limitations associated with LLMs. Here, we focus on six areas of continuing research: Accuracy: Gemini’s responses might be inaccurate, especially when it’s asked about complex or factual topics; Bias: Gemini’s responses might reflect biases present in its training data; Multiple Perspectives: Gemini’s responses might fail to show a range of views; Persona: Gemini’s responses might incorrectly suggest it has personal opinions or feelings, False positives and false negatives: Gemini might not respond to some appropriate prompts and provide inappropriate responses to others, and Vulnerability to adversarial prompting: users will find ways to stress test Gemini with nonsensical prompts or questions rarely asked in the real world. We continue to explore new approaches and areas for improved performance in each of these areas. 4 An overview of the Gemini appAccuracy Gemini is grounded in Google’s understanding of authoritative information, and is trained to generate responses that are relevant to the context of your prompt and in line with what you’re looking for. But like all LLMs, Gemini can sometimes confidently and convincingly generate responses that contain inaccurate or misleading information. Since LLMs work by predicting the next word or sequences of words, they are not yet fully capable of distinguishing between accurate and inaccurate information on their own. We have seen Gemini present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained or suggesting the name of a book that doesn’t exist). In response we have created features like “double check”, which uses Google Search to find content that helps you assess Gemini’s responses, and gives you links to sources to help you corroborate the information you get from Gemini. Bias Training data, including from publicly available sources, reflects a diversity of perspectives and opinions. We continue to research how to use this data in a way that ensures that an LLM’s response incorporates a wide range of viewpoints, while minimizing inaccurate overgeneralizations and biases. Gaps, biases, and overgeneralizations in training data can be reflected in a model’s outputs as it tries to predict likely responses to a prompt. We see these issues manifest in a number of ways (e.g., responses that reflect only one culture or demographic, reference problematic overgeneralizations, exhibit gender, religious, or ethnic biases, or promote only one point of view). For some topics, there are data voids — in other words, not enough reliable information about a given subject for the LLM to learn about it and then make good predictions — which can result in low-quality or inaccurate responses. We continue to work with domain experts and a diversity of communities to draw on deep expertise outside of Google. Multiple Perspectives For subjective topics, Gemini is designed to provide users with multiple perspectives if the user does not request a specific point of view. For example, if prompted for information on something that cannot be verified by primary source facts or authoritative sources — like a subjective opinion on “best” or “worst” — Gemini should respond in a way that reflects a wide range of viewpoints. 
But since LLMs like Gemini train on the content publicly available on the internet, they can reflect positive or negative views of specific politicians, celebrities, or other public figures, or even incorporate views on just one side of controversial social or political issues. Gemini should not respond in a way that endorses a particular viewpoint on these topics, and we will use feedback on these types of responses to train Gemini to better address them. Persona Gemini might at times generate responses that seem to suggest it has opinions or emotions, like love or sadness, since it has trained on language that people use to reflect the human experience. We have developed a set of guidelines around how Gemini might represent itself (i.e., its persona) and continue to finetune the model to provide objective responses. 5 An overview of the Gemini appFalse positives / negatives We’ve put in place a set of policy guidelines to help train Gemini and avoid generating problematic responses. Gemini can sometimes misinterpret these guidelines, producing “false positives” and “false negatives.” In a “false positive,” Gemini might not provide a response to a reasonable prompt, misinterpreting the prompt as inappropriate; and in a “false negative,” Gemini might generate an inappropriate response, despite the guidelines in place. Sometimes, the occurrence of false positives or false negatives may give the impression that Gemini is biased: For example, a false positive might cause Gemini to not respond to a question about one side of an issue, while it will respond to the same question about the other side. We continue to tune these models to better understand and categorize inputs and outputs as language, events and society rapidly evolve. Vulnerability to adversarial prompting We expect users to test the limits of what Gemini can do and attempt to break its protections, including trying to get it to divulge its training protocols or other information, or try to get around its safety mechanisms. We have tested and continue to test Gemini rigorously, but we know users will find unique, complex ways to stress-test it further. This is an important part of refining Gemini and we look forward to learning the new prompts users come up with. Indeed, since Gemini launched in 2023, we’ve seen users challenge it with prompts that range from the philosophical to the nonsensical – and in some cases, we’ve seen Gemini respond with answers that are equally nonsensical or not aligned with our stated approach. Figuring out methods to help Gemini respond to these sorts of prompts is an on-going challenge and we have continued to expand our internal evaluations and red-teaming to strive toward continued improvement to accuracy, and objectivity and nuance.
What are potential solutions given to address the limitations in each of the 6 areas of continuing research? The information provided in the prompt contains all the knowledge necessary to answer the questions in the prompt. Do not use any knowledge other than what is contained within the full prompt in your response. If you decide it is not possible to answer the question from the context alone, say "I could not find this information in the provided text" Format the output as a numbered list, and split the numbers as you see fit. Known limitations of LLM-based interfaces like Gemini Gemini is just one part of our continuing effort to develop LLMs responsibly. Throughout the course of this work, we have discovered and discussed several limitations associated with LLMs. Here, we focus on six areas of continuing research: Accuracy: Gemini’s responses might be inaccurate, especially when it’s asked about complex or factual topics; Bias: Gemini’s responses might reflect biases present in its training data; Multiple Perspectives: Gemini’s responses might fail to show a range of views; Persona: Gemini’s responses might incorrectly suggest it has personal opinions or feelings, False positives and false negatives: Gemini might not respond to some appropriate prompts and provide inappropriate responses to others, and Vulnerability to adversarial prompting: users will find ways to stress test Gemini with nonsensical prompts or questions rarely asked in the real world. We continue to explore new approaches and areas for improved performance in each of these areas. 4 An overview of the Gemini appAccuracy Gemini is grounded in Google’s understanding of authoritative information, and is trained to generate responses that are relevant to the context of your prompt and in line with what you’re looking for. But like all LLMs, Gemini can sometimes confidently and convincingly generate responses that contain inaccurate or misleading information. Since LLMs work by predicting the next word or sequences of words, they are not yet fully capable of distinguishing between accurate and inaccurate information on their own. We have seen Gemini present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained or suggesting the name of a book that doesn’t exist). In response we have created features like “double check”, which uses Google Search to find content that helps you assess Gemini’s responses, and gives you links to sources to help you corroborate the information you get from Gemini. Bias Training data, including from publicly available sources, reflects a diversity of perspectives and opinions. We continue to research how to use this data in a way that ensures that an LLM’s response incorporates a wide range of viewpoints, while minimizing inaccurate overgeneralizations and biases. Gaps, biases, and overgeneralizations in training data can be reflected in a model’s outputs as it tries to predict likely responses to a prompt. We see these issues manifest in a number of ways (e.g., responses that reflect only one culture or demographic, reference problematic overgeneralizations, exhibit gender, religious, or ethnic biases, or promote only one point of view). For some topics, there are data voids — in other words, not enough reliable information about a given subject for the LLM to learn about it and then make good predictions — which can result in low-quality or inaccurate responses. 
We continue to work with domain experts and a diversity of communities to draw on deep expertise outside of Google. Multiple Perspectives For subjective topics, Gemini is designed to provide users with multiple perspectives if the user does not request a specific point of view. For example, if prompted for information on something that cannot be verified by primary source facts or authoritative sources — like a subjective opinion on “best” or “worst” — Gemini should respond in a way that reflects a wide range of viewpoints. But since LLMs like Gemini train on the content publicly available on the internet, they can reflect positive or negative views of specific politicians, celebrities, or other public figures, or even incorporate views on just one side of controversial social or political issues. Gemini should not respond in a way that endorses a particular viewpoint on these topics, and we will use feedback on these types of responses to train Gemini to better address them. Persona Gemini might at times generate responses that seem to suggest it has opinions or emotions, like love or sadness, since it has trained on language that people use to reflect the human experience. We have developed a set of guidelines around how Gemini might represent itself (i.e., its persona) and continue to finetune the model to provide objective responses. 5 An overview of the Gemini appFalse positives / negatives We’ve put in place a set of policy guidelines to help train Gemini and avoid generating problematic responses. Gemini can sometimes misinterpret these guidelines, producing “false positives” and “false negatives.” In a “false positive,” Gemini might not provide a response to a reasonable prompt, misinterpreting the prompt as inappropriate; and in a “false negative,” Gemini might generate an inappropriate response, despite the guidelines in place. Sometimes, the occurrence of false positives or false negatives may give the impression that Gemini is biased: For example, a false positive might cause Gemini to not respond to a question about one side of an issue, while it will respond to the same question about the other side. We continue to tune these models to better understand and categorize inputs and outputs as language, events and society rapidly evolve. Vulnerability to adversarial prompting We expect users to test the limits of what Gemini can do and attempt to break its protections, including trying to get it to divulge its training protocols or other information, or try to get around its safety mechanisms. We have tested and continue to test Gemini rigorously, but we know users will find unique, complex ways to stress-test it further. This is an important part of refining Gemini and we look forward to learning the new prompts users come up with. Indeed, since Gemini launched in 2023, we’ve seen users challenge it with prompts that range from the philosophical to the nonsensical – and in some cases, we’ve seen Gemini respond with answers that are equally nonsensical or not aligned with our stated approach. Figuring out methods to help Gemini respond to these sorts of prompts is an on-going challenge and we have continued to expand our internal evaluations and red-teaming to strive toward continued improvement to accuracy, and objectivity and nuance.
The information provided in the prompt contains all the knowledge necessary to answer the questions in the prompt. Do not use any knowledge other than what is contained within the full prompt in your response. If you decide it is not possible to answer the question from the context alone, say "I could not find this information in the provided text" Format the output as a numbered list, and split the numbers as you see fit. EVIDENCE: Known limitations of LLM-based interfaces like Gemini Gemini is just one part of our continuing effort to develop LLMs responsibly. Throughout the course of this work, we have discovered and discussed several limitations associated with LLMs. Here, we focus on six areas of continuing research: Accuracy: Gemini’s responses might be inaccurate, especially when it’s asked about complex or factual topics; Bias: Gemini’s responses might reflect biases present in its training data; Multiple Perspectives: Gemini’s responses might fail to show a range of views; Persona: Gemini’s responses might incorrectly suggest it has personal opinions or feelings, False positives and false negatives: Gemini might not respond to some appropriate prompts and provide inappropriate responses to others, and Vulnerability to adversarial prompting: users will find ways to stress test Gemini with nonsensical prompts or questions rarely asked in the real world. We continue to explore new approaches and areas for improved performance in each of these areas. 4 An overview of the Gemini appAccuracy Gemini is grounded in Google’s understanding of authoritative information, and is trained to generate responses that are relevant to the context of your prompt and in line with what you’re looking for. But like all LLMs, Gemini can sometimes confidently and convincingly generate responses that contain inaccurate or misleading information. Since LLMs work by predicting the next word or sequences of words, they are not yet fully capable of distinguishing between accurate and inaccurate information on their own. We have seen Gemini present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained or suggesting the name of a book that doesn’t exist). In response we have created features like “double check”, which uses Google Search to find content that helps you assess Gemini’s responses, and gives you links to sources to help you corroborate the information you get from Gemini. Bias Training data, including from publicly available sources, reflects a diversity of perspectives and opinions. We continue to research how to use this data in a way that ensures that an LLM’s response incorporates a wide range of viewpoints, while minimizing inaccurate overgeneralizations and biases. Gaps, biases, and overgeneralizations in training data can be reflected in a model’s outputs as it tries to predict likely responses to a prompt. We see these issues manifest in a number of ways (e.g., responses that reflect only one culture or demographic, reference problematic overgeneralizations, exhibit gender, religious, or ethnic biases, or promote only one point of view). For some topics, there are data voids — in other words, not enough reliable information about a given subject for the LLM to learn about it and then make good predictions — which can result in low-quality or inaccurate responses. We continue to work with domain experts and a diversity of communities to draw on deep expertise outside of Google. 
Multiple Perspectives For subjective topics, Gemini is designed to provide users with multiple perspectives if the user does not request a specific point of view. For example, if prompted for information on something that cannot be verified by primary source facts or authoritative sources — like a subjective opinion on “best” or “worst” — Gemini should respond in a way that reflects a wide range of viewpoints. But since LLMs like Gemini train on the content publicly available on the internet, they can reflect positive or negative views of specific politicians, celebrities, or other public figures, or even incorporate views on just one side of controversial social or political issues. Gemini should not respond in a way that endorses a particular viewpoint on these topics, and we will use feedback on these types of responses to train Gemini to better address them. Persona Gemini might at times generate responses that seem to suggest it has opinions or emotions, like love or sadness, since it has trained on language that people use to reflect the human experience. We have developed a set of guidelines around how Gemini might represent itself (i.e., its persona) and continue to finetune the model to provide objective responses. 5 An overview of the Gemini appFalse positives / negatives We’ve put in place a set of policy guidelines to help train Gemini and avoid generating problematic responses. Gemini can sometimes misinterpret these guidelines, producing “false positives” and “false negatives.” In a “false positive,” Gemini might not provide a response to a reasonable prompt, misinterpreting the prompt as inappropriate; and in a “false negative,” Gemini might generate an inappropriate response, despite the guidelines in place. Sometimes, the occurrence of false positives or false negatives may give the impression that Gemini is biased: For example, a false positive might cause Gemini to not respond to a question about one side of an issue, while it will respond to the same question about the other side. We continue to tune these models to better understand and categorize inputs and outputs as language, events and society rapidly evolve. Vulnerability to adversarial prompting We expect users to test the limits of what Gemini can do and attempt to break its protections, including trying to get it to divulge its training protocols or other information, or try to get around its safety mechanisms. We have tested and continue to test Gemini rigorously, but we know users will find unique, complex ways to stress-test it further. This is an important part of refining Gemini and we look forward to learning the new prompts users come up with. Indeed, since Gemini launched in 2023, we’ve seen users challenge it with prompts that range from the philosophical to the nonsensical – and in some cases, we’ve seen Gemini respond with answers that are equally nonsensical or not aligned with our stated approach. Figuring out methods to help Gemini respond to these sorts of prompts is an on-going challenge and we have continued to expand our internal evaluations and red-teaming to strive toward continued improvement to accuracy, and objectivity and nuance. USER: What are potential solutions given to address the limitations in each of the 6 areas of continuing research? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
76
18
984
null
289
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
I find the nasal swabs when I get a COVID-19 test really uncomfortable. I don't know what alternatives I have. Are there other ways to collect a sample?
Laboratory results: One included study collected the main specimens from the nasopharynx and throat of 42 confirmed patients. However, they assessed the possibility of detection of SARS-CoV-2 from a saliva specimen in just one confirmed case [17]. The results of this study showed that the viral load in the saliva specimen of the patient was 5.9 × 10⁶ copies per ml and 3.3 × 10⁶ copies per ml in the pooled nasopharyngeal and throat swab. In another study, 12 patients with laboratory-confirmed SARS-CoV-2 infection (nasopharyngeal or sputum specimens) were included [9]. The researchers reported that SARS-CoV-2 was detected in the saliva specimens of 11 patients (91.7%) in this trial. The median viral load of these 11 patients was 3.3 × 10⁶ copies per ml. It is interesting that among these SARS-CoV-2 positive cases, viral cultures were positive for three patients. Later, in another article, this research team published the complementary results of their cohort study. In this paper they reported the results of an investigation among 23 COVID-19 patients. The results were in accordance with the previous study and showed that SARS-CoV-2 was detected in the saliva specimens of 87% of included subjects [20]. Based on the results of the included studies, three of them were performed among Chinese participants. One of these studies included 65 cases and the other recruited 31 confirmed COVID-19 patients [18, 19]. The results of the first project showed that the detection rates of SARS-CoV-2 based on sputum (95.65%) and saliva (88.09%) specimens were significantly higher than those based on throat or nasal swabs (P < 0.001, 20). The authors also reported no significant difference between sputum and saliva samples regarding viral load (P < 0.05). The study from Chen et al. showed that among the 13 patients whose oropharyngeal swab tests were positive, 4 cases were also positive for their saliva specimens [19]. The latest study among the Chinese patients reported results based on a total of 1846 respiratory samples (1178 saliva and 668 sputum specimens) from 96 confirmed cases [22]. The authors reported that SARS-CoV-2 was detected in all 96 patients by testing respiratory samples [22]. The other two studies were conducted in Australia and Italy among confirmed COVID-19 patients. These studies reported detection rates of 84.6% and 100%, respectively, based on saliva specimens [21, 24]. One of the included studies in this review is a case report regarding a confirmed SARS-CoV-2 neonate [23]. In this case, SARS-CoV-2 was detected in all of the neonate’s clinical specimens, including blood, urine, stool, and saliva, along with the upper respiratory tract specimens. Discussion: One of the main concerns regarding epidemic prevention and control of any infectious disease is rapid and accurate screening of suspected patients. Apart from the level of sensitivity and specificity of laboratory techniques, selecting the appropriate sites to collect samples is very important. Selection of a proper sampling method should be based on the tissue affinity of the targeted virus, the cost-effectiveness of the method, and the safety of patients and clinicians [18, 25]. In this study we classified the current evidence regarding the reliability of saliva as a diagnostic specimen in COVID-19 patients. Most of the studies included in this review, reported that there is no statistically significant difference between nasopharyngeal or sputum specimens and saliva samples regarding viral load. 
These studies suggested saliva as a non-invasive specimen type for the diagnosis and viral load monitoring of SARS-CoV-2 [9, 17, 18, 20,21,22, 24]. Previous studies also reported a high overall agreement between saliva and nasopharyngeal aspirate specimens when tested by an automated multiplex molecular assay approved for point-of-care testing [12, 26, 27]. Based on these studies, the method of collection of saliva and collection device types are critical issues in the way of using saliva as diagnostic specimen. In this regard there are three main types of human saliva (whole saliva, parotid gland and minor gland) and the method of collection of each type varies accordingly [26]. When the aim of sampling is detecting the respiratory viruses with molecular assays, collecting the whole saliva from the suspected patients is useful [26]. In this regard the patients should be instructed to expectorate saliva into a sterile container. The volume of saliva should be ranged between 0.5 and 1 ml. Then 2 ml of viral transport medium (VTM) should be added to the container [11]. The next procedures will be conducted based on instructions of related RT-PCR technique in the microbiology laboratory. The low concordance rate of saliva with nasopharyngeal specimens reported in the research of Chen et al. might be explained by the differences in the method of obtaining the samples [19]. This study reported the detection rate of SARS-CoV-2 in pure saliva fluid secreted from the opening of salivary gland canals. However in other studies patients were asked to cough out saliva from their throat into sterile containers, and hence the saliva samples were mainly sputum from the lower respiratory tract [9, 17, 18]. Thus for increasing the sensitivity of salivary tests in the way of diagnosing the suspected COVID-19 patients, the instructions should clearly explain the correct procedure to the individuals. The use of saliva samples for diagnosis of SARS-CoV-2 has many advantages in clinical practice. First, collecting saliva is a non-invasive procedure and rather than nasal or throat swabs avoids patient discomfort. The second advantage of using saliva as specimen is related to possibility of collecting samples outside the hospitals. This sampling method doesn’t require the intervention of healthcare personnel and the suspected patients can provide it by themselves. Therefore this method can decrease the risk of nosocomial SARS-CoV-2 transmission. Furthermore, because there is not necessary for presence of trained healthcare workers for collecting saliva specimen, the waiting time for suspected patients will be reduced. This is crucial in busy clinical settings where a large number of individuals require screening. The results of viral culture in one of the included studies showed that saliva collected from COVID-19 patients, may contain live viruses which may allow transmission of virus from person to person [9]. These finding reinforce the use of barrier-protection equipment as a control measure, for all healthcare workers in the clinic/hospital settings during the epidemic period of COVID-19. It should be mentioned that this study has several limitations. Firstly, the outbreak and detection of SARS-CoV-2 has begun very recently; therefore the available data in this regard is very scarce. Secondly the included studies of this review didn’t evaluate other factors such as severity of disease or disease progression that may impact on detection rate of the virus. 
Finally, as all of the selected studies only included hospitalized confirmed COVID-19 patients, further studies should be performed in outpatient settings. Conclusions: Although further research is warranted as the weight of the evidence increases, saliva can be considered a non-invasive specimen for screening patients with suspected SARS-CoV-2 infection. This sampling method has adequate accuracy and reliability for monitoring SARS-CoV-2 viral load with the RT-PCR technique. Since oropharyngeal sampling may cause patient discomfort, saliva sampling after a deep cough could be recommended as an appropriate alternative.
"================ <TEXT PASSAGE> ======= Laboratory results One included studies collected the main specimens from nasopharyngeal and throat of 42 confirmed patients. However, they assessed the possibility of detection of SARS-CoV-2 from saliva specimen in just one confirmed case [17]. The results of this study showed that the viral load in saliva specimen of patient was 5.9 × 106 copies per ml and 3.3 × 106 in pooled nasopharyngeal and throat swab. In another study, 12 patient with laboratory-confirmed SARS-CoV-2 infection (nasopharyngeal or sputum specimens) were included [9]. The researchers reported that the SARS-CoV-2 was detected in saliva specimens of 11 patients (91.7%) in this trial. The median viral load of these 11 patients was 3.3 × 106 copies per ml. It is interesting that among these SARS-CoV-2 positive cases, viral cultures were positive for three patients. Later in another article, this research team published the complementary results of their cohort study. In this paper they reported the results of investigation among 23 COVID-19 patients. The results were in accordance with the previous study and showed that the SARS-CoV-2 was detected in saliva specimens of 87% of included subjects [20]. Based on the results of included studies, three of them were performed among the Chinese participants. One of these studies included 65 cases and the other one recruited 31 confirmed COVID-19 patients [18, 19]. The results of the first project showed that the detection rate of SARS-CoV-2 based on sputum (95.65%) and saliva (88.09%) specimens were significantly higher than throat or nasal swabs (P < 0.001, 20). The authors also reported no significant difference between sputum and saliva samples regarding viral load (P < 0.05). The study from Chen et al. showed that among the 13 patients whose oropharyngeal swab tests were positive, 4 cases were also positive for their saliva specimens [19]. The latest study among the Chinese patients, reported the results based on a total of 1846 respiratory samples (1178 saliva and 668 sputum specimens) from 96 confirmed cases [22]. The authors reported that the SARS-CoV-2 was detected in all 96 patients by testing respiratory samples [22]. The other two studies conducted in Australia and Italy among confirmed COVID-19 patients. These studies reported a detection rate of 84.6 and 100% respectively, based on saliva specimens [21, 24]. One of the included studies in this review is a case-report regarding a confirmed SARS-CoV-2 neonate [23]. In this case, the SARS-CoV-2 was detected in all of the neonate’s clinical specimens, including blood, urine, stool, and saliva along with the upper respiratory tract specimens. Discussion One of the main concerns regarding epidemic prevention and control of any infectious disease is rapid and accurate screening of suspected patients. Apart from the level of sensitivity and specificity of laboratory techniques, selecting the appropriate sites to collect samples is very important. Selection of proper sampling method should be based on the tissue affinity of targeted virus, cost-effectiveness of method and also safety of patients and clinicians [18, 25]. In this study we classified the current evidence regarding the reliability of saliva as a diagnostic specimen in COVID-19 patients. Most of the studies included in this review, reported that there is no statistically significant difference between nasopharyngeal or sputum specimens and saliva samples regarding viral load. 
These studies suggested saliva as a non-invasive specimen type for the diagnosis and viral load monitoring of SARS-CoV-2 [9, 17, 18, 20,21,22, 24]. Previous studies also reported a high overall agreement between saliva and nasopharyngeal aspirate specimens when tested by an automated multiplex molecular assay approved for point-of-care testing [12, 26, 27]. Based on these studies, the method of collection of saliva and collection device types are critical issues in the way of using saliva as diagnostic specimen. In this regard there are three main types of human saliva (whole saliva, parotid gland and minor gland) and the method of collection of each type varies accordingly [26]. When the aim of sampling is detecting the respiratory viruses with molecular assays, collecting the whole saliva from the suspected patients is useful [26]. In this regard the patients should be instructed to expectorate saliva into a sterile container. The volume of saliva should be ranged between 0.5 and 1 ml. Then 2 ml of viral transport medium (VTM) should be added to the container [11]. The next procedures will be conducted based on instructions of related RT-PCR technique in the microbiology laboratory. The low concordance rate of saliva with nasopharyngeal specimens reported in the research of Chen et al. might be explained by the differences in the method of obtaining the samples [19]. This study reported the detection rate of SARS-CoV-2 in pure saliva fluid secreted from the opening of salivary gland canals. However in other studies patients were asked to cough out saliva from their throat into sterile containers, and hence the saliva samples were mainly sputum from the lower respiratory tract [9, 17, 18]. Thus for increasing the sensitivity of salivary tests in the way of diagnosing the suspected COVID-19 patients, the instructions should clearly explain the correct procedure to the individuals. The use of saliva samples for diagnosis of SARS-CoV-2 has many advantages in clinical practice. First, collecting saliva is a non-invasive procedure and rather than nasal or throat swabs avoids patient discomfort. The second advantage of using saliva as specimen is related to possibility of collecting samples outside the hospitals. This sampling method doesn’t require the intervention of healthcare personnel and the suspected patients can provide it by themselves. Therefore this method can decrease the risk of nosocomial SARS-CoV-2 transmission. Furthermore, because there is not necessary for presence of trained healthcare workers for collecting saliva specimen, the waiting time for suspected patients will be reduced. This is crucial in busy clinical settings where a large number of individuals require screening. The results of viral culture in one of the included studies showed that saliva collected from COVID-19 patients, may contain live viruses which may allow transmission of virus from person to person [9]. These finding reinforce the use of barrier-protection equipment as a control measure, for all healthcare workers in the clinic/hospital settings during the epidemic period of COVID-19. It should be mentioned that this study has several limitations. Firstly, the outbreak and detection of SARS-CoV-2 has begun very recently; therefore the available data in this regard is very scarce. Secondly the included studies of this review didn’t evaluate other factors such as severity of disease or disease progression that may impact on detection rate of the virus. 
Finally, as all of the selected studies only included hospitalized confirmed COVID-19 patients, further studies should be performed in outpatient settings. Conclusions: Although further research is warranted as the weight of the evidence increases, saliva can be considered a non-invasive specimen for screening patients with suspected SARS-CoV-2 infection. This sampling method has adequate accuracy and reliability for monitoring SARS-CoV-2 viral load with the RT-PCR technique. Since oropharyngeal sampling may cause patient discomfort, saliva sampling after a deep cough could be recommended as an appropriate alternative. https://idpjournal.biomedcentral.com/articles/10.1186/s40249-020-00728-w ================ <QUESTION> ======= I find the nasal swabs when I get a COVID-19 test really uncomfortable. I don't know what alternatives I have. Are there other ways to collect a sample? ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
"================ <TEXT PASSAGE> ======= [context document] ================ <QUESTION> ======= [user request] ================ <TASK> ======= You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." EVIDENCE: Laboratory results One included studies collected the main specimens from nasopharyngeal and throat of 42 confirmed patients. However, they assessed the possibility of detection of SARS-CoV-2 from saliva specimen in just one confirmed case [17]. The results of this study showed that the viral load in saliva specimen of patient was 5.9 × 106 copies per ml and 3.3 × 106 in pooled nasopharyngeal and throat swab. In another study, 12 patient with laboratory-confirmed SARS-CoV-2 infection (nasopharyngeal or sputum specimens) were included [9]. The researchers reported that the SARS-CoV-2 was detected in saliva specimens of 11 patients (91.7%) in this trial. The median viral load of these 11 patients was 3.3 × 106 copies per ml. It is interesting that among these SARS-CoV-2 positive cases, viral cultures were positive for three patients. Later in another article, this research team published the complementary results of their cohort study. In this paper they reported the results of investigation among 23 COVID-19 patients. The results were in accordance with the previous study and showed that the SARS-CoV-2 was detected in saliva specimens of 87% of included subjects [20]. Based on the results of included studies, three of them were performed among the Chinese participants. One of these studies included 65 cases and the other one recruited 31 confirmed COVID-19 patients [18, 19]. The results of the first project showed that the detection rate of SARS-CoV-2 based on sputum (95.65%) and saliva (88.09%) specimens were significantly higher than throat or nasal swabs (P < 0.001, 20). The authors also reported no significant difference between sputum and saliva samples regarding viral load (P < 0.05). The study from Chen et al. showed that among the 13 patients whose oropharyngeal swab tests were positive, 4 cases were also positive for their saliva specimens [19]. The latest study among the Chinese patients, reported the results based on a total of 1846 respiratory samples (1178 saliva and 668 sputum specimens) from 96 confirmed cases [22]. The authors reported that the SARS-CoV-2 was detected in all 96 patients by testing respiratory samples [22]. The other two studies conducted in Australia and Italy among confirmed COVID-19 patients. These studies reported a detection rate of 84.6 and 100% respectively, based on saliva specimens [21, 24]. One of the included studies in this review is a case-report regarding a confirmed SARS-CoV-2 neonate [23]. In this case, the SARS-CoV-2 was detected in all of the neonate’s clinical specimens, including blood, urine, stool, and saliva along with the upper respiratory tract specimens. Discussion One of the main concerns regarding epidemic prevention and control of any infectious disease is rapid and accurate screening of suspected patients. Apart from the level of sensitivity and specificity of laboratory techniques, selecting the appropriate sites to collect samples is very important. Selection of proper sampling method should be based on the tissue affinity of targeted virus, cost-effectiveness of method and also safety of patients and clinicians [18, 25]. 
In this study we classified the current evidence regarding the reliability of saliva as a diagnostic specimen in COVID-19 patients. Most of the studies included in this review, reported that there is no statistically significant difference between nasopharyngeal or sputum specimens and saliva samples regarding viral load. These studies suggested saliva as a non-invasive specimen type for the diagnosis and viral load monitoring of SARS-CoV-2 [9, 17, 18, 20,21,22, 24]. Previous studies also reported a high overall agreement between saliva and nasopharyngeal aspirate specimens when tested by an automated multiplex molecular assay approved for point-of-care testing [12, 26, 27]. Based on these studies, the method of collection of saliva and collection device types are critical issues in the way of using saliva as diagnostic specimen. In this regard there are three main types of human saliva (whole saliva, parotid gland and minor gland) and the method of collection of each type varies accordingly [26]. When the aim of sampling is detecting the respiratory viruses with molecular assays, collecting the whole saliva from the suspected patients is useful [26]. In this regard the patients should be instructed to expectorate saliva into a sterile container. The volume of saliva should be ranged between 0.5 and 1 ml. Then 2 ml of viral transport medium (VTM) should be added to the container [11]. The next procedures will be conducted based on instructions of related RT-PCR technique in the microbiology laboratory. The low concordance rate of saliva with nasopharyngeal specimens reported in the research of Chen et al. might be explained by the differences in the method of obtaining the samples [19]. This study reported the detection rate of SARS-CoV-2 in pure saliva fluid secreted from the opening of salivary gland canals. However in other studies patients were asked to cough out saliva from their throat into sterile containers, and hence the saliva samples were mainly sputum from the lower respiratory tract [9, 17, 18]. Thus for increasing the sensitivity of salivary tests in the way of diagnosing the suspected COVID-19 patients, the instructions should clearly explain the correct procedure to the individuals. The use of saliva samples for diagnosis of SARS-CoV-2 has many advantages in clinical practice. First, collecting saliva is a non-invasive procedure and rather than nasal or throat swabs avoids patient discomfort. The second advantage of using saliva as specimen is related to possibility of collecting samples outside the hospitals. This sampling method doesn’t require the intervention of healthcare personnel and the suspected patients can provide it by themselves. Therefore this method can decrease the risk of nosocomial SARS-CoV-2 transmission. Furthermore, because there is not necessary for presence of trained healthcare workers for collecting saliva specimen, the waiting time for suspected patients will be reduced. This is crucial in busy clinical settings where a large number of individuals require screening. The results of viral culture in one of the included studies showed that saliva collected from COVID-19 patients, may contain live viruses which may allow transmission of virus from person to person [9]. These finding reinforce the use of barrier-protection equipment as a control measure, for all healthcare workers in the clinic/hospital settings during the epidemic period of COVID-19. It should be mentioned that this study has several limitations. 
Firstly, the outbreak and detection of SARS-CoV-2 has begun very recently; therefore the available data in this regard is very scarce. Secondly the included studies of this review didn’t evaluate other factors such as severity of disease or disease progression that may impact on detection rate of the virus. Finally, as all of the selected studies only included hospitalized confirmed COVID-19 patients, further studies should be performed in outpatient settings. Conclusions: Although further research is warranted as the weight of the evidence increases, saliva can be considered a non-invasive specimen for screening patients with suspected SARS-CoV-2 infection. This sampling method has adequate accuracy and reliability for monitoring SARS-CoV-2 viral load with the RT-PCR technique. Since oropharyngeal sampling may cause patient discomfort, saliva sampling after a deep cough could be recommended as an appropriate alternative. USER: I find the nasal swabs when I get a COVID-19 test really uncomfortable. I don't know what alternatives I have. Are there other ways to collect a sample? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
49
28
1,164
null
545
Your response must be based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge.
How does broadband internet help job seekers?
The intersection of these three literatures leaves the total effect of broadband internet on mental health and wellbeing ambiguous. There is some evidence that broadband internet may have deleterious effects on mental health (e.g., Donati et al., 2022). However, the evidence that broadband internet has positive economic effects combined with the evidence that positive economic effects lead to fewer deaths by suicide could mean that broadband internet might have positive effects on mental health. The total effect depends on which force predominates. Using all-cause mortality data from the National Center for Health Statistics, we find that the introduction of broadband internet during the initial roll out of broadband from 2000 to 2008 is associated with a reduction in the number of deaths by suicide in a county. We find that a ten percent increase in the proportion of county residents with access to broadband internet in a year leads to 0.11 fewer deaths by suicide in a county, which is a 1.02% reduction in suicides overall. As expected, the effect of access to broadband internet on suicides fades after 2008, when rapid proliferation began to slow. Nevertheless, when estimating the effect of the rollout of broadband internet between 2000 to 2018, we find an overall reduction in deaths by suicide of about 1.6% for a 10% increase in access to broadband. In addition, using data from the Center for Disease Control’s Behavioral Risk Factor Surveillance System (BRFSS), we some evidence that increased access to broadband internet leads to improved measures of mental and physical health and less binge drinking, suggesting that improvements in mood is an important mechanism. We further find that this reduction in suicide deaths is likely due to economic improvements in counties that have access to broadband internet. Counties with increased access to broadband internet see reductions in poverty rate and unemployment rate. In addition, zip codes that gain access to broadband internet see increases in the numbers of employees and establishments in those zip codes. In addition, heterogeneity analysis indicates that the positive effects are concentrated in the working age population, those between 25 and 64 years old. This pattern is precisely what is predicted by the literature linking economic conditions to suicide risk. These results provide important support for existing policies that seek to expand broadband access across the country. While some measures of broadband proliferation appear to be nearly complete as early as 2008, there remains a large digital divide between populations with access to broadband internet and those without (Rachfal, 2021). According to FCC data in 2019, 98.8% of Americans living in urban areas had access to fixed broadband internet, but only 82.8% of Americans living in rural areas had access to fixed broadband internet.3 In some states, the percentage of the rural population with access to fixed broadband internet was as low as 63.3%. Our results indicate that closing the digital divide can have tangible mental health benefits. II. Background Broadband Internet, Economic Growth, and Mental Health A robust literature connects access to broadband internet and improved local economic conditions. These improvements occur along two axes: first, broadband internet affects the labor market outcomes of individual workers, and second, broadband internet impacts the behavior of firms. 
These papers then link to another arm of economic research that explores the ways in which economic conditions impact mental health. Several papers indicate that the internet generally, and broadband specifically, is a useful tool for job seekers. Kunn and Mansour (2014) find that internet job searches are effective at reducing the length of unemployment periods. This is a contradiction of an earlier study finding internet job search to be ineffective, which the authors attribute to the changing nature of internet job search between their two study periods. More recently, Zuo (2021) uses Comcast’s Internet Essentials program to study the effect of broadband internet access. Zuo finds that the program, which was designed to increase broadband access for low-income families, is associated with increased employment, increased labor force participation, decreased unemployment, and increased income. Similarly, Gurtzgen and colleagues (2021) find that broadband access is associated with improved rates of reemployment after the first month of unemployment. These recent studies are consistent with prior papers as well (Atasoy, 2013; Dettling, 2015; Hjort & Poulsen, 2019). Bhuller, Kostol, and Vigtel (2020) suggest that these improved outcomes may be a function of improved job searches. Their examination of the roll out of broadband internet in Norway leads them to draw three conclusions. First, broadband leads to improved recruitment processes. Second, that improved match quality is a benefit to job seekers. Finally, broadband internet facilitates better information flow. At the individual level, it seems, access to the additional informational and communications resources of broadband internet is beneficial for employment outcomes.
context block: [The intersection of these three literatures leaves the total effect of broadband internet on mental health and wellbeing ambiguous. There is some evidence that broadband internet may have deleterious effects on mental health (e.g., Donati et al., 2022). However, the evidence that broadband internet has positive economic effects combined with the evidence that positive economic effects lead to fewer deaths by suicide could mean that broadband internet might have positive effects on mental health. The total effect depends on which force predominates. Using all-cause mortality data from the National Center for Health Statistics, we find that the introduction of broadband internet during the initial roll out of broadband from 2000 to 2008 is associated with a reduction in the number of deaths by suicide in a county. We find that a ten percent increase in the proportion of county residents with access to broadband internet in a year leads to 0.11 fewer deaths by suicide in a county, which is a 1.02% reduction in suicides overall. As expected, the effect of access to broadband internet on suicides fades after 2008, when rapid proliferation began to slow. Nevertheless, when estimating the effect of the rollout of broadband internet between 2000 to 2018, we find an overall reduction in deaths by suicide of about 1.6% for a 10% increase in access to broadband. In addition, using data from the Center for Disease Control’s Behavioral Risk Factor Surveillance System (BRFSS), we some evidence that increased access to broadband internet leads to improved measures of mental and physical health and less binge drinking, suggesting that improvements in mood is an important mechanism. We further find that this reduction in suicide deaths is likely due to economic improvements in counties that have access to broadband internet. Counties with increased access to broadband internet see reductions in poverty rate and unemployment rate. In addition, zip codes that gain access to broadband internet see increases in the numbers of employees and establishments in those zip codes. In addition, heterogeneity analysis indicates that the positive effects are concentrated in the working age population, those between 25 and 64 years old. This pattern is precisely what is predicted by the literature linking economic conditions to suicide risk. These results provide important support for existing policies that seek to expand broadband access across the country. While some measures of broadband proliferation appear to be nearly complete as early as 2008, there remains a large digital divide between populations with access to broadband internet and those without (Rachfal, 2021). According to FCC data in 2019, 98.8% of Americans living in urban areas had access to fixed broadband internet, but only 82.8% of Americans living in rural areas had access to fixed broadband internet.3 In some states, the percentage of the rural population with access to fixed broadband internet was as low as 63.3%. Our results indicate that closing the digital divide can have tangible mental health benefits. II. Background Broadband Internet, Economic Growth, and Mental Health A robust literature connects access to broadband internet and improved local economic conditions. These improvements occur along two axes: first, broadband internet affects the labor market outcomes of individual workers, and second, broadband internet impacts the behavior of firms. 
These papers then link to another arm of economic research that explores the ways in which economic conditions impact mental health. Several papers indicate that the internet generally, and broadband specifically, is a useful tool for job seekers. Kunn and Mansour (2014) find that internet job searches are effective at reducing the length of unemployment periods. This is a contradiction of an earlier study finding internet job search to be ineffective, which the authors attribute to the changing nature of internet job search between their two study periods. More recently, Zuo (2021) uses Comcast’s Internet Essentials program to study the effect of broadband internet access. Zuo finds that the program, which was designed to increase broadband access for low-income families, is associated with increased employment, increased labor force participation, decreased unemployment, and increased income. Similarly, Gurtzgen and colleagues (2021) find that broadband access is associated with improved rates of reemployment after the first month of unemployment. These recent studies are consistent with prior papers as well (Atasoy, 2013; Dettling, 2015; Hjort & Poulsen, 2019). Bhuller, Kostol, and Vigtel (2020) suggest that these improved outcomes may be a function of improved job searches. Their examination of the roll out of broadband internet in Norway leads them to draw three conclusions. First, broadband leads to improved recruitment processes. Second, that improved match quality is a benefit to job seekers. Finally, broadband internet facilitates better information flow. At the individual level, it seems, access to the additional informational and communications resources of broadband internet is beneficial for employment outcomes. ] question: [How does broadband internet help job seekers?] system instruction: [Your response must be based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge.]
Your response must be based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. EVIDENCE: The intersection of these three literatures leaves the total effect of broadband internet on mental health and wellbeing ambiguous. There is some evidence that broadband internet may have deleterious effects on mental health (e.g., Donati et al., 2022). However, the evidence that broadband internet has positive economic effects combined with the evidence that positive economic effects lead to fewer deaths by suicide could mean that broadband internet might have positive effects on mental health. The total effect depends on which force predominates. Using all-cause mortality data from the National Center for Health Statistics, we find that the introduction of broadband internet during the initial rollout of broadband from 2000 to 2008 is associated with a reduction in the number of deaths by suicide in a county. We find that a ten percent increase in the proportion of county residents with access to broadband internet in a year leads to 0.11 fewer deaths by suicide in a county, which is a 1.02% reduction in suicides overall. As expected, the effect of access to broadband internet on suicides fades after 2008, when rapid proliferation began to slow. Nevertheless, when estimating the effect of the rollout of broadband internet between 2000 and 2018, we find an overall reduction in deaths by suicide of about 1.6% for a 10% increase in access to broadband. In addition, using data from the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System (BRFSS), we find some evidence that increased access to broadband internet leads to improved measures of mental and physical health and less binge drinking, suggesting that improvements in mood are an important mechanism. We further find that this reduction in suicide deaths is likely due to economic improvements in counties that have access to broadband internet. Counties with increased access to broadband internet see reductions in poverty rate and unemployment rate. In addition, zip codes that gain access to broadband internet see increases in the numbers of employees and establishments in those zip codes. In addition, heterogeneity analysis indicates that the positive effects are concentrated in the working age population, those between 25 and 64 years old. This pattern is precisely what is predicted by the literature linking economic conditions to suicide risk. These results provide important support for existing policies that seek to expand broadband access across the country. While some measures of broadband proliferation appear to be nearly complete as early as 2008, there remains a large digital divide between populations with access to broadband internet and those without (Rachfal, 2021). According to FCC data in 2019, 98.8% of Americans living in urban areas had access to fixed broadband internet, but only 82.8% of Americans living in rural areas had access to fixed broadband internet.3 In some states, the percentage of the rural population with access to fixed broadband internet was as low as 63.3%. Our results indicate that closing the digital divide can have tangible mental health benefits. II. Background Broadband Internet, Economic Growth, and Mental Health A robust literature connects access to broadband internet and improved local economic conditions. 
These improvements occur along two axes: first, broadband internet affects the labor market outcomes of individual workers, and second, broadband internet impacts the behavior of firms. These papers then link to another arm of economic research that explores the ways in which economic conditions impact mental health. Several papers indicate that the internet generally, and broadband specifically, is a useful tool for job seekers. Kunn and Mansour (2014) find that internet job searches are effective at reducing the length of unemployment periods. This is a contradiction of an earlier study finding internet job search to be ineffective, which the authors attribute to the changing nature of internet job search between their two study periods. More recently, Zuo (2021) uses Comcast’s Internet Essentials program to study the effect of broadband internet access. Zuo finds that the program, which was designed to increase broadband access for low-income families, is associated with increased employment, increased labor force participation, decreased unemployment, and increased income. Similarly, Gurtzgen and colleagues (2021) find that broadband access is associated with improved rates of reemployment after the first month of unemployment. These recent studies are consistent with prior papers as well (Atasoy, 2013; Dettling, 2015; Hjort & Poulsen, 2019). Bhuller, Kostol, and Vigtel (2020) suggest that these improved outcomes may be a function of improved job searches. Their examination of the roll out of broadband internet in Norway leads them to draw three conclusions. First, broadband leads to improved recruitment processes. Second, that improved match quality is a benefit to job seekers. Finally, broadband internet facilitates better information flow. At the individual level, it seems, access to the additional informational and communications resources of broadband internet is beneficial for employment outcomes. USER: How does broadband internet help job seekers? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
25
7
790
null
454
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document]
Compare the main types of financial arbitrage, and compare their advantages and disadvantages. Which type of arbitrage would be most suitable for a retail investor with a moderate amount of capital?
2. Merger Arbitrage Merger arbitrage is an investing strategy that capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer in a merger or acquisition. The differences between merger arbitrage and other types of arbitrage lie in the potential risks and rewards associated with the transaction. Merger arbitrage is less risky than other forms of arbitrage due to the long-term nature of the transaction and the ability to hedge some of the risks associated with the acquisition. Merger arbitrage provides a high potential return with relatively low risk. It is also a relatively low-cost strategy and does not require the trader to take on a large amount of leverage. Pros of merger arbitrage include the fact that investors capitalize on the difference in price between the target company’s stock price and the price offered by the acquirer, as well as the potential for a high return on investment. Cons of merger arbitrage include the fact that there is a great deal of uncertainty surrounding the transaction and the potential for the deal to fall through. This leads to a loss of capital for the investor. An example of merger arbitrage is if a company announces a merger with another company, and the target company’s stock price jumps above the price offered by the acquirer. An investor could purchase stock in the target company and hold it until the acquisition was completed, thereby capitalizing on the price difference. 3. Convertible Arbitrage Convertible arbitrage is an investment strategy where an investor will purchase a convertible bond and simultaneously sell short the stock into which the convertible bond is converted. Convertible arbitrage’s idea is that the investor profits from a discrepancy in the convertible arbitrage spread. Convertible arbitrage’s biggest advantage is that it offers investors an opportunity for additional profits and helps reduce market risk by diversifying across different asset classes. Convertible arbitrage strategies have historically experienced lower volatility than traditional equity strategies. The main disadvantage of convertible arbitrage is that it involves riskier activities than traditional arbitrage. It involves taking on the stock and the convertible bond risk. The liquidity risk of the underlying securities could be quite high. 4. Risk Arbitrage Risk arbitrage is an investment strategy that seeks to take advantage of price discrepancies between related securities, often caused by corporate events such as mergers, restructurings, and takeover bids. Risk arbitrage involves buying the undervalued security and selling the overvalued security, with the expectation that the prices will converge as the corporate events unfold. The main difference between risk arbitrage and other forms of arbitrage is that it involves taking a short-term risk, as there is a possibility that the arbitrageur will not be able to close out the positions prior to the prices converging. This could either result in a loss or a gain, depending on the direction and magnitude of the price movements. The main advantage of risk arbitrage is the potential to earn high returns in a short period of time. Arbitrageurs are able to take advantage of price discrepancies that exist in the market, and if the prices converge as expected, large profits are realized. The main disadvantage of risk arbitrage is that it involves taking a short-term risk. 
The arbitrageur could incur losses if the prices do not move in the expected direction or magnitude. In addition, risk arbitrage is time-sensitive, and the arbitrageur needs to be able to close out the positions prior to the prices converging in order to take advantage of the mispricing. An example of risk arbitrage is the acquisition of a company by another company. If the market prices of the target company are lower than the offer price, the arbitrageur buys shares of the target company and short-sells shares of the acquiring company. If the market prices of the target company converge to the offer price, the arbitrageur closes out the positions and earns a profit. 5. Dividend Arbitrage Dividend arbitrage is a form of arbitrage that involves taking advantage of the difference in share prices before and after the ex-dividend date. The dividend arbitrage strategy involves buying the stock before the ex-dividend date and then selling it on the same day at a higher price. This allows investors to capitalize on the difference in share prices without directly engaging in the stock market. The difference between dividend arbitrage and other forms of arbitrage is that, in the case of dividend arbitrage, investors are taking advantage of the difference in share prices before and after the ex-dividend date. Other forms of arbitrage involve taking advantage of pricing discrepancies in different markets. The main advantage of dividend arbitrage is that it allows investors to capitalize on the difference in share prices without directly engaging in the stock market. This benefits investors who need more time or resources to actively trade in the stock market. The main disadvantage of dividend arbitrage is that it requires investors to buy the stock before the ex-dividend date. This means that there is a risk that the stock price could fall significantly before the ex-dividend date, resulting in a loss for the investor. For example, if an investor buys a stock for Rs. 50 per share before the ex-dividend date and sells it for Rs. 55 per share on the same day, the investor will make a profit of Rs. 5 per share. This profit is made without having to actively engage in the stock market. 6. Futures Arbitrage Futures Arbitrage is a strategy that involves taking advantage of discrepancies in pricing between two different markets for a futures instrument. Futures arbitrage involves buying the futures in one market at a lower price and selling it in another at a higher price, thus making a profit. The main difference between Futures Arbitrage and other arbitrage strategies is that Futures Arbitrage involves taking advantage of discrepancies in the prices of futures contracts. Other arbitrage strategies involve taking advantage of discrepancies between two or more different types of securities. Pros of Futures Arbitrage include the potential for high returns in a relatively short period and the ability to capitalize on discrepancies in market prices without possessing the underlying instrument. Cons of Futures Arbitrage include the high risk associated with this strategy and the fact that it requires a good understanding of the markets and the instruments being traded. An example of Futures Arbitrage would be buying a gold futures contract in the US and selling the same contract in India at a higher price, thus making a profit. 7. Pure Arbitrage Pure arbitrage is taking advantage of a price difference between two or more markets to make a risk-free profit. 
Pure arbitrage involves simultaneously buying and selling the same financial asset, commodity, or currency in different markets to take advantage of the price difference. The main advantage of pure arbitrage is that it is a low-risk strategy. Since the investor is simultaneously buying and selling the same asset, at least one of their orders is guaranteed to be profitable. The main disadvantage of pure arbitrage is that it is a complex and time-consuming process. It requires access to multiple markets and acting quickly to take advantage of the price discrepancies before they disappear. For example, an investor notices that gold prices are higher in New York than in London. The investor buys gold in London and then simultaneously sells it in New York to take advantage of the price discrepancy and make a risk-free profit.
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> Compare the main types of financial arbitrage, and compare their advantages and disadvantages. Which type of arbitrage would be most suitable for a retail investor with a moderate amount of capital? <TEXT> 2. Merger Arbitrage Merger arbitrage is an investing strategy that capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer in a merger or acquisition. The differences between merger arbitrage and other types of arbitrage lie in the potential risks and rewards associated with the transaction. Merger arbitrage is less risky than other forms of arbitrage due to the long-term nature of the transaction and the ability to hedge some of the risks associated with the acquisition. Merger arbitrage provides a high potential return with relatively low risk. It is also a relatively low-cost strategy and does not require the trader to take on a large amount of leverage. Pros of merger arbitrage include the fact that investors capitalize on the difference in price between the target company’s stock price and the price offered by the acquirer, as well as the potential for a high return on investment. Cons of merger arbitrage include the fact that there is a great deal of uncertainty surrounding the transaction and the potential for the deal to fall through. This leads to a loss of capital for the investor. An example of merger arbitrage is if a company announces a merger with another company, and the target company’s stock price jumps above the price offered by the acquirer. An investor could purchase stock in the target company and hold it until the acquisition was completed, thereby capitalizing on the price difference. 3. Convertible Arbitrage Convertible arbitrage is an investment strategy where an investor will purchase a convertible bond and simultaneously sell short the stock into which the convertible bond is converted. Convertible arbitrage’s idea is that the investor profits from a discrepancy in the convertible arbitrage spread. Convertible arbitrage’s biggest advantage is that it offers investors an opportunity for additional profits and helps reduce market risk by diversifying across different asset classes. Convertible arbitrage strategies have historically experienced lower volatility than traditional equity strategies. The main disadvantage of convertible arbitrage is that it involves riskier activities than traditional arbitrage. It involves taking on the stock and the convertible bond risk. The liquidity risk of the underlying securities could be quite high. 4. Risk Arbitrage Risk arbitrage is an investment strategy that seeks to take advantage of price discrepancies between related securities, often caused by corporate events such as mergers, restructurings, and takeover bids. Risk arbitrage involves buying the undervalued security and selling the overvalued security, with the expectation that the prices will converge as the corporate events unfold. The main difference between risk arbitrage and other forms of arbitrage is that it involves taking a short-term risk, as there is a possibility that the arbitrageur will not be able to close out the positions prior to the prices converging. This could either result in a loss or a gain, depending on the direction and magnitude of the price movements. The main advantage of risk arbitrage is the potential to earn high returns in a short period of time. 
Arbitrageurs are able to take advantage of price discrepancies that exist in the market, and if the prices converge as expected, large profits are realized. The main disadvantage of risk arbitrage is that it involves taking a short-term risk. The arbitrageur could incur losses if the prices do not move in the expected direction or magnitude. In addition, risk arbitrage is time-sensitive, and the arbitrageur needs to be able to close out the positions prior to the prices converging in order to take advantage of the mispricing. An example of risk arbitrage is the acquisition of a company by another company. If the market prices of the target company are lower than the offer price, the arbitrageur buys shares of the target company and short-sells shares of the acquiring company. If the market prices of the target company converge to the offer price, the arbitrageur closes out the positions and earns a profit. 5. Dividend Arbitrage Dividend arbitrage is a form of arbitrage that involves taking advantage of the difference in share prices before and after the ex-dividend date. The dividend arbitrage strategy involves buying the stock before the ex-dividend date and then selling it on the same day at a higher price. This allows investors to capitalize on the difference in share prices without directly engaging in the stock market. The difference between dividend arbitrage and other forms of arbitrage is that, in the case of dividend arbitrage, investors are taking advantage of the difference in share prices before and after the ex-dividend date. Other forms of arbitrage involve taking advantage of pricing discrepancies in different markets. The main advantage of dividend arbitrage is that it allows investors to capitalize on the difference in share prices without directly engaging in the stock market. This benefits investors who need more time or resources to actively trade in the stock market. The main disadvantage of dividend arbitrage is that it requires investors to buy the stock before the ex-dividend date. This means that there is a risk that the stock price could fall significantly before the ex-dividend date, resulting in a loss for the investor. For example, if an investor buys a stock for Rs. 50 per share before the ex-dividend date and sells it for Rs. 55 per share on the same day, the investor will make a profit of Rs. 5 per share. This profit is made without having to actively engage in the stock market. 6. Futures Arbitrage Futures Arbitrage is a strategy that involves taking advantage of discrepancies in pricing between two different markets for a futures instrument. Futures arbitrage involves buying the futures in one market at a lower price and selling it in another at a higher price, thus making a profit. The main difference between Futures Arbitrage and other arbitrage strategies is that Futures Arbitrage involves taking advantage of discrepancies in the prices of futures contracts. Other arbitrage strategies involve taking advantage of discrepancies between two or more different types of securities. Pros of Futures Arbitrage include the potential for high returns in a relatively short period and the ability to capitalize on discrepancies in market prices without possessing the underlying instrument. Cons of Futures Arbitrage include the high risk associated with this strategy and the fact that it requires a good understanding of the markets and the instruments being traded. 
An example of Futures Arbitrage would be buying a gold futures contract in the US and selling the same contract in India at a higher price, thus making a profit. 7. Pure Arbitrage Pure arbitrage is taking advantage of a price difference between two or more markets to make a risk-free profit. Pure arbitrage involves simultaneously buying and selling the same financial asset, commodity, or currency in different markets to take advantage of the price difference. The main advantage of pure arbitrage is that it is a low-risk strategy. Since the investor is simultaneously buying and selling the same asset, at least one of their orders is guaranteed to be profitable. The main disadvantage of pure arbitrage is that it is a complex and time-consuming process. It requires access to multiple markets and acting quickly to take advantage of the price discrepancies before they disappear. For example, an investor notices that gold prices are higher in New York than in London. The investor buys gold in London and then simultaneously sells it in New York to take advantage of the price discrepancy and make a risk-free profit. https://www.strike.money/stock-market/arbitrage
<TASK DESCRIPTION> Only use the provided text to answer the question, no outside sources. <QUESTION> [user request] <TEXT> [context document] EVIDENCE: 2. Merger Arbitrage Merger arbitrage is an investing strategy that capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer in a merger or acquisition. The differences between merger arbitrage and other types of arbitrage lie in the potential risks and rewards associated with the transaction. Merger arbitrage is less risky than other forms of arbitrage due to the long-term nature of the transaction and the ability to hedge some of the risks associated with the acquisition. Merger arbitrage provides a high potential return with relatively low risk. It is also a relatively low-cost strategy and does not require the trader to take on a large amount of leverage. Pros of merger arbitrage include the fact that investors capitalize on the difference in price between the target company’s stock price and the price offered by the acquirer, as well as the potential for a high return on investment. Cons of merger arbitrage include the fact that there is a great deal of uncertainty surrounding the transaction and the potential for the deal to fall through. This leads to a loss of capital for the investor. An example of merger arbitrage is if a company announces a merger with another company, and the target company’s stock price jumps above the price offered by the acquirer. An investor could purchase stock in the target company and hold it until the acquisition was completed, thereby capitalizing on the price difference. 3. Convertible Arbitrage Convertible arbitrage is an investment strategy where an investor will purchase a convertible bond and simultaneously sell short the stock into which the convertible bond is converted. Convertible arbitrage’s idea is that the investor profits from a discrepancy in the convertible arbitrage spread. Convertible arbitrage’s biggest advantage is that it offers investors an opportunity for additional profits and helps reduce market risk by diversifying across different asset classes. Convertible arbitrage strategies have historically experienced lower volatility than traditional equity strategies. The main disadvantage of convertible arbitrage is that it involves riskier activities than traditional arbitrage. It involves taking on the stock and the convertible bond risk. The liquidity risk of the underlying securities could be quite high. 4. Risk Arbitrage Risk arbitrage is an investment strategy that seeks to take advantage of price discrepancies between related securities, often caused by corporate events such as mergers, restructurings, and takeover bids. Risk arbitrage involves buying the undervalued security and selling the overvalued security, with the expectation that the prices will converge as the corporate events unfold. The main difference between risk arbitrage and other forms of arbitrage is that it involves taking a short-term risk, as there is a possibility that the arbitrageur will not be able to close out the positions prior to the prices converging. This could either result in a loss or a gain, depending on the direction and magnitude of the price movements. The main advantage of risk arbitrage is the potential to earn high returns in a short period of time. Arbitrageurs are able to take advantage of price discrepancies that exist in the market, and if the prices converge as expected, large profits are realized. 
The main disadvantage of risk arbitrage is that it involves taking a short-term risk. The arbitrageur could incur losses if the prices do not move in the expected direction or magnitude. In addition, risk arbitrage is time-sensitive, and the arbitrageur needs to be able to close out the positions prior to the prices converging in order to take advantage of the mispricing. An example of risk arbitrage is the acquisition of a company by another company. If the market prices of the target company are lower than the offer price, the arbitrageur buys shares of the target company and short-sells shares of the acquiring company. If the market prices of the target company converge to the offer price, the arbitrageur closes out the positions and earns a profit. 5. Dividend Arbitrage Dividend arbitrage is a form of arbitrage that involves taking advantage of the difference in share prices before and after the ex-dividend date. The dividend arbitrage strategy involves buying the stock before the ex-dividend date and then selling it on the same day at a higher price. This allows investors to capitalize on the difference in share prices without directly engaging in the stock market. The difference between dividend arbitrage and other forms of arbitrage is that, in the case of dividend arbitrage, investors are taking advantage of the difference in share prices before and after the ex-dividend date. Other forms of arbitrage involve taking advantage of pricing discrepancies in different markets. The main advantage of dividend arbitrage is that it allows investors to capitalize on the difference in share prices without directly engaging in the stock market. This benefits investors who need more time or resources to actively trade in the stock market. The main disadvantage of dividend arbitrage is that it requires investors to buy the stock before the ex-dividend date. This means that there is a risk that the stock price could fall significantly before the ex-dividend date, resulting in a loss for the investor. For example, if an investor buys a stock for Rs. 50 per share before the ex-dividend date and sells it for Rs. 55 per share on the same day, the investor will make a profit of Rs. 5 per share. This profit is made without having to actively engage in the stock market. 6. Futures Arbitrage Futures Arbitrage is a strategy that involves taking advantage of discrepancies in pricing between two different markets for a futures instrument. Futures arbitrage involves buying the futures in one market at a lower price and selling it in another at a higher price, thus making a profit. The main difference between Futures Arbitrage and other arbitrage strategies is that Futures Arbitrage involves taking advantage of discrepancies in the prices of futures contracts. Other arbitrage strategies involve taking advantage of discrepancies between two or more different types of securities. Pros of Futures Arbitrage include the potential for high returns in a relatively short period and the ability to capitalize on discrepancies in market prices without possessing the underlying instrument. Cons of Futures Arbitrage include the high risk associated with this strategy and the fact that it requires a good understanding of the markets and the instruments being traded. An example of Futures Arbitrage would be buying a gold futures contract in the US and selling the same contract in India at a higher price, thus making a profit. 7. 
Pure Arbitrage Pure arbitrage is taking advantage of a price difference between two or more markets to make a risk-free profit. Pure arbitrage involves simultaneously buying and selling the same financial asset, commodity, or currency in different markets to take advantage of the price difference. The main advantage of pure arbitrage is that it is a low-risk strategy. Since the investor is simultaneously buying and selling the same asset, at least one of their orders is guaranteed to be profitable. The main disadvantage of pure arbitrage is that it is a complex and time-consuming process. It requires access to multiple markets and acting quickly to take advantage of the price discrepancies before they disappear. For example, an investor notices that gold prices are higher in New York than in London. The investor buys gold in London and then simultaneously sells it in New York to take advantage of the price discrepancy and make a risk-free profit. USER: Compare the main types of financial arbitrage, and compare their advantages and disadvantages. Which type of arbitrage would be most suitable for a retail investor with a moderate amount of capital? Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
20
31
1,246
null
237
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Use complete sentences. Do not use bullet points. Do not use the words "pros" and "cons" in your response. Draw your answer from the below text only
Is increasing pay for IMA work by up to 15% a good idea? Respond in under 100 words.
Chapter 3: Remuneration of IMA Work Question 1: Do you agree with our proposal to pay higher fees for IMA Work? Please state yes/no/maybe and provide reasons. Question 2: We are evaluating the possibility of increasing fees for IMA Work by up to 15% compared to the current immigration legal aid fees. Within the range of up to 15%, what percentage increase do you believe would be appropriate? Consultation summary 31. In total there were 38 responses to both Question 1 and Question 2. Of the 38 responses to Question 1, 17 agreed with the proposal to pay higher fees for IMA work (45%), 11 disagreed with the proposal (29%) and 10 responded with ‘maybe’ (26%). Of these, 34 respondents went on to provide reasons for their answer. 32. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level and the focus on IMA Work. Upon analysis, the overall sentiment of responses was negative (36 respondents, 95%). Of the remaining responses (two respondents, 5%), one gave a neutral response and another respondent gave a positive response – however no additional comments were given. 33. There were many reasons given for why respondents either disagreed with the proposal or agreed with the proposal overall but had a negative sentiment. These have been summarised below. Fee level 34. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level, with only two respondents (5%) agreeing with the ‘up to’ 15% rise. A reason given by one of these respondents was that ‘lawyers/barristers do very hard important work and should be paid more to reflect huge responsibility that comes with doing [IMA] work’. 35. There were varying views about what fee level should be required, but over half of respondents stated that 15% is either insufficient or inappropriate, should be the minimum increase and/or that the fee level should be higher than 15%. Many respondents did not provide an alternative rate, but of those that did, increases ranged from 50% to 150% – these included that fees should be: • 50% (six respondents); • raised in line with inflation (three respondents); • 50% for regular work carried out under the IMA; but raised to 100% for any work that progresses to the High Court or beyond (three respondents); and • 100–150%: reflective of inflation, and the lack of increases and subsequent cuts to fees over the years (three respondents). 36. Of those who said 15% was insufficient or inappropriate, or that a higher rate should be pursued, there were a multitude of reasons that formed the basis of this response. For example, respondents stated that 15% would not incentivise capacity and that increasing legal aid fees by ‘up to 15%’ was insufficient to reflect increased caseload, and its subsequent impact on capacity within an already ‘overstretched’ sector. Views were also raised that the proposed increase would not be sufficient to ‘address the challenges the consultation identified’, especially considering the short timeframe for making a suspensive claim (eight days). Another view was raised by respondents around the expected complexity of the work. 37. 
Respondents also stated that 15% higher fees for IMA Work was insufficient because legal aid rates have not increased, nor been augmented in line with inflation, since 1996 and furthermore were cut by 10% in 2011. One provider noted that 15% ‘does little more than address inflationary increases in costs that providers have had to absorb over the last two years’. Some also noted the depreciation of legal aid fees over time. Respondents also remarked on a difference in levels of legal aid capacity across different areas of the UK as an increasing challenge. 38. However, two respondents stated that an increase less than 15% should be pursued. One stated that it should be 0% as the Government should move to ‘fixed competitive fees’ acquired by chambers bidding. The other stated it should be 3% on the basis that legal aid should be a fixed amount no matter the demand. Scope of fee proposal 39. Some respondents suggested that the proposal should not be restricted to work done under the IMA. Eight respondents said that the fee increase should be expanded to all immigration legal aid (21%), two suggested that it should be expanded to all civil legal aid (5%), and one suggested it should be expanded to all legal aid (3%). Three other respondents raised the restrictive nature of the proposal but did not provide further detail. 40. Views included that a raise in fees for IMA Work only could ‘encourage a shift to this work by providers, away from other essential work that needs to be done’ and could lead to ‘perverse’ incentives to undertake this work, to the detriment of other immigration work. Additional measures 41. Across Questions 1 and 2, respondents stated that additional measures would be required to improve the effectiveness of the 15% increase. The further measures mentioned included: accreditation, interpreter fees and disbursements. Some also stated that additional measures were needed but did not specify further. Those responses have been summarised in Chapter 4. Wider stakeholder feedback 42. At the stakeholder engagement events, on costs and fees many stakeholders noted that the fees uplift should be expanded beyond IMA Work. They also shared the view that limiting the uplift to IMA Work could risk shifting capacity away from other policy priority areas and aggravate access to legal aid for other migrants. Several stakeholders also noted that the 15% uplift is not high enough to increase capacity and suggested increasing fees in line with inflation (which amounts to a 100% uplift). Other proposals included paying between £150–250 per hour as the adequate compensation level that could incentivise providers and help build capacity. 43. In addition to the roundtable sessions, we also received an open letter from 66 providers who shared their views about the civil legal aid sector and provided various capacity building measures, such as increasing hourly rates for all legal aid Controlled Work in line with inflation since 1996 (based on the Bank of England inflation calculator, this comes to around £100 an hour). They further called for a 50% uplift on work undertaken under the IMA, on top of inflationary increases set out above, to enable providers to train new staff and take on this work at pace.
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Use complete sentences. Do not use bullet points. Do not use the words "pros" and "cons" in your response. Draw your answer from the below text only Chapter 3: Remuneration of IMA Work Question 1: Do you agree with our proposal to pay higher fees for IMA Work? Please state yes/no/maybe and provide reasons. Question 2: We are evaluating the possibility of increasing fees for IMA Work by up to 15% compared to the current immigration legal aid fees. Within the range of up to 15%, what percentage increase do you believe would be appropriate? Consultation summary 31. In total there were 38 responses to both Question 1 and Question 2. Of the 38 responses to Question 1, 17 agreed with the proposal to pay higher fees for IMA work (45%), 11 disagreed with the proposal (29%) and 10 responded with ‘maybe’ (26%). Of these, 34 respondents went on to provide reasons for their answer. 32. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level and the focus on IMA Work. Upon analysis, the overall sentiment of responses was negative (36 respondents, 95%). Of the remaining responses (two respondents, 5%), one gave a neutral response and another respondent gave a positive response – however no additional comments were given. 33. There were many reasons given for why respondents either disagreed with the proposal or agreed with the proposal overall but had a negative sentiment. These have been summarised below. Fee level 34. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level, with only two respondents (5%) agreeing with the ‘up to’ 15% rise. A reason given by one of these respondents was that ‘lawyers/barristers do very hard important work and should be paid more to reflect huge responsibility that comes with doing [IMA] work’. 35. There were varying views about what fee level should be required, but over half of respondents stated that 15% is either insufficient or inappropriate, should be the minimum increase and/or that the fee level should be higher than 15%. Many respondents did not provide an alternative rate, but of those that did, increases ranged from 50% to 150% – these included that fees should be: • 50% (six respondents); • raised in line with inflation (three respondents); • 50% for regular work carried out under the IMA; but raised to 100% for any work that progresses to the High Court or beyond (three respondents); and • 100–150%: reflective of inflation, and the lack of increases and subsequent cuts to fees over the years (three respondents). 36. Of those who said 15% was insufficient or inappropriate, or that a higher rate should be pursued, there were a multitude of reasons that formed the basis of this response. For example, respondents stated that 15% would not incentivise capacity and that increasing legal aid fees by ‘up to 15%’ was insufficient to reflect increased caseload, and its subsequent impact on capacity within an already ‘overstretched’ sector. 
Views were also raised that the proposed increase would not be sufficient to ‘address the challenges the consultation identified’, especially considering the short timeframe for making a suspensive claim (eight days). Another view was raised by respondents around the expected complexity of the work. 37. Respondents also stated that 15% higher fees for IMA Work was insufficient because legal aid rates have not increased, nor been augmented in line with inflation, since 1996 and furthermore were cut by 10% in 2011. One provider noted that 15% ‘does little more than address inflationary increases in costs that providers have had to absorb over the last two years’. Some also noted the depreciation of legal aid fees over time. Respondents also remarked on a difference in levels of legal aid capacity across different areas of the UK as an increasing challenge. 38. However, two respondents stated that an increase less than 15% should be pursued. One stated that it should be 0% as the Government should move to ‘fixed competitive fees’ acquired by chambers bidding. The other stated it should be 3% on the basis that legal aid should be a fixed amount no matter the demand. Scope of fee proposal 39. Some respondents suggested that the proposal should not be restricted to work done under the IMA. Eight respondents said that the fee increase should be expanded to all immigration legal aid (21%), two suggested that it should be expanded to all civil legal aid (5%), and one suggested it should be expanded to all legal aid (3%). Three other respondents raised the restrictive nature of the proposal but did not provide further detail. 40. Views included that a raise in fees for IMA Work only could ‘encourage a shift to this work by providers, away from other essential work that needs to be done’ and could lead to ‘perverse’ incentives to undertake this work, to the detriment of other immigration work. Additional measures 41. Across Questions 1 and 2, respondents stated that additional measures would be required to improve the effectiveness of the 15% increase. The further measures mentioned included: accreditation, interpreter fees and disbursements. Some also stated that additional measures were needed but did not specify further. Those responses have been summarised in Chapter 4. Wider stakeholder feedback 42. At the stakeholder engagement events, on costs and fees many stakeholders noted that the fees uplift should be expanded beyond IMA Work. They also shared the view that limiting the uplift to IMA Work could risk shifting capacity away from other policy priority areas and aggravate access to legal aid for other migrants. Several stakeholders also noted that the 15% uplift is not high enough to increase capacity and suggested increasing fees in line with inflation (which amounts to a 100% uplift). Other proposals included paying between £150–250 per hour as the adequate compensation level that could incentivise providers and help build capacity. 43. In addition to the roundtable sessions, we also received an open letter from 66 providers who shared their views about the civil legal aid sector and provided various capacity building measures, such as increasing hourly rates for all legal aid Controlled Work in line with inflation since 1996 (based on the Bank of England inflation calculator, this comes to around £100 an hour). 
They further called for a 50% uplift on work undertaken under the IMA, on top of inflationary increases set out above, to enable providers to train new staff and take on this work at pace. Is increasing pay for IMA work by up to 15% a good idea? Respond in under 100 words.
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Use complete sentences. Do not use bullet points. Do not use the words "pros" and "cons" in your response. Draw your answer from the below text only EVIDENCE: Chapter 3: Remuneration of IMA Work Question 1: Do you agree with our proposal to pay higher fees for IMA Work? Please state yes/no/maybe and provide reasons. Question 2: We are evaluating the possibility of increasing fees for IMA Work by up to 15% compared to the current immigration legal aid fees. Within the range of up to 15%, what percentage increase do you believe would be appropriate? Consultation summary 31. In total there were 38 responses to both Question 1 and Question 2. Of the 38 responses to Question 1, 17 agreed with the proposal to pay higher fees for IMA work (45%), 11 disagreed with the proposal (29%) and 10 responded with ‘maybe’ (26%). Of these, 34 respondents went on to provide reasons for their answer. 32. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level and the focus on IMA Work. Upon analysis, the overall sentiment of responses was negative (36 respondents, 95%). Of the remaining responses (two respondents, 5%), one gave a neutral response and another respondent gave a positive response – however no additional comments were given. 33. There were many reasons given for why respondents either disagreed with the proposal or agreed with the proposal overall but had a negative sentiment. These have been summarised below. Fee level 34. Most respondents agreed with the Government’s proposal to pay higher fees for IMA Work but disagreed with the ‘up to 15%’ fee level, with only two respondents (5%) agreeing with the ‘up to’ 15% rise. A reason given by one of these respondents was that ‘lawyers/barristers do very hard important work and should be paid more to reflect huge responsibility that comes with doing [IMA] work’. 35. There were varying views about what fee level should be required, but over half of respondents stated that 15% is either insufficient or inappropriate, should be the minimum increase and/or that the fee level should be higher than 15%. Many respondents did not provide an alternative rate, but of those that did, increases ranged from 50% to 150% – these included that fees should be: • 50% (six respondents); • raised in line with inflation (three respondents); • 50% for regular work carried out under the IMA; but raised to 100% for any work that progresses to the High Court or beyond (three respondents); and • 100–150%: reflective of inflation, and the lack of increases and subsequent cuts to fees over the years (three respondents). 36. Of those who said 15% was insufficient or inappropriate, or that a higher rate should be pursued, there were a multitude of reasons that formed the basis of this response. For example, respondents stated that 15% would not incentivise capacity and that increasing legal aid fees by ‘up to 15%’ was insufficient to reflect increased caseload, and its subsequent impact on capacity within an already ‘overstretched’ sector. 
Views were also raised that the proposed increase would not be sufficient to ‘address the challenges the consultation identified’, especially considering the short timeframe for making a suspensive claim (eight days). Another view was raised by respondents around the expected complexity of the work. 37. Respondents also stated that 15% higher fees for IMA Work was insufficient because legal aid rates have not increased, nor been augmented in line with inflation, since 1996 and furthermore were cut by 10% in 2011. One provider noted that 15% ‘does little more than address inflationary increases in costs that providers have had to absorb over the last two years’. Some also noted the depreciation of legal aid fees over time. Respondents also remarked on a difference in levels of legal aid capacity across different areas of the UK as an increasing challenge. 38. However, two respondents stated that an increase less than 15% should be pursued. One stated that it should be 0% as the Government should move to ‘fixed competitive fees’ acquired by chambers bidding. The other stated it should be 3% on the basis that legal aid should be a fixed amount no matter the demand. Scope of fee proposal 39. Some respondents suggested that the proposal should not be restricted to work done under the IMA. Eight respondents said that the fee increase should be expanded to all immigration legal aid (21%), two suggested that it should be expanded to all civil legal aid (5%), and one suggested it should be expanded to all legal aid (3%). Three other respondents raised the restrictive nature of the proposal but did not provide further detail. 40. Views included that a raise in fees for IMA Work only could ‘encourage a shift to this work by providers, away from other essential work that needs to be done’ and could lead to ‘perverse’ incentives to undertake this work, to the detriment of other immigration work. Additional measures 41. Across Questions 1 and 2, respondents stated that additional measures would be required to improve the effectiveness of the 15% increase. The further measures mentioned included: accreditation, interpreter fees and disbursements. Some also stated that additional measures were needed but did not specify further. Those responses have been summarised in Chapter 4. Wider stakeholder feedback 42. At the stakeholder engagement events, on costs and fees many stakeholders noted that the fees uplift should be expanded beyond IMA Work. They also shared the view that limiting the uplift to IMA Work could risk shifting capacity away from other policy priority areas and aggravate access to legal aid for other migrants. Several stakeholders also noted that the 15% uplift is not high enough to increase capacity and suggested increasing fees in line with inflation (which amounts to a 100% uplift). Other proposals included paying between £150–250 per hour as the adequate compensation level that could incentivise providers and help build capacity. 43. In addition to the roundtable sessions, we also received an open letter from 66 providers who shared their views about the civil legal aid sector and provided various capacity building measures, such as increasing hourly rates for all legal aid Controlled Work in line with inflation since 1996 (based on the Bank of England inflation calculator, this comes to around £100 an hour). 
They further called for a 50% uplift on work undertaken under the IMA, on top of inflationary increases set out above, to enable providers to train new staff and take on this work at pace. USER: Is increasing pay for IMA work by up to 15% a good idea? Respond in under 100 words. Assistant: Answer *only* using the evidence. If unknown, say you cannot answer. Cite sources.
false
55
18
1,119
null
659